
Nakamoto consensus has been incredibly influential in enabling robust blockchain systems, and one of its components is the so-called heaviest chain rule (HCR). Under this rule, the weight of a chain tip is computed by adding the block's difficulty threshold value to the previous total difficulty. Current difficulty-based weighting schemes do not take the intrinsic block weight into account. This paper studies a new mechanism based on entropy differences, named proof of entropy minima (POEM), which incorporates the intrinsic block weight in a manner that significantly reduces the orphan rate of the blockchain while simultaneously accelerating finalization. Finally, POEM helps to understand blockchain as a static, time-independent sequence of committed events.
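
To make the contrast concrete, here is a minimal sketch comparing the classic difficulty-based tip weight with an intrinsic weight derived from how far a block's hash falls below its target. The function names and example hashes are illustrative assumptions, not the paper's exact POEM rule:

```python
import math

# Illustrative sketch (assumed names and example hashes, not the paper's exact
# POEM rule): contrast the heaviest-chain tip weight, which adds the same
# threshold-derived difficulty for every block, with an intrinsic weight that
# credits a block for how far its hash actually landed below the target.

HASH_SPACE = 2**256

def hcr_weight(prev_total: float, difficulty: float) -> float:
    """Heaviest chain rule: every valid block adds the same difficulty."""
    return prev_total + difficulty

def intrinsic_weight(prev_total: float, block_hash: int) -> float:
    """Intrinsic weight: -log2(hash / hash space) is the number of bits of
    work the hash actually exhibits, so luckier (smaller) hashes weigh more."""
    return prev_total + (-math.log2(block_hash / HASH_SPACE))

target = HASH_SPACE // 2**20        # both blocks satisfy this target
difficulty = HASH_SPACE / target    # ~2**20
lucky_hash = HASH_SPACE // 2**32    # landed far below the target
barely_hash = target - 1            # landed just under the target

# The difficulty rule cannot tell the two blocks apart; the intrinsic rule can.
print(hcr_weight(0.0, difficulty), hcr_weight(0.0, difficulty))               # 1048576.0 1048576.0
print(intrinsic_weight(0.0, lucky_hash), intrinsic_weight(0.0, barely_hash))  # 32.0 ~20.0
```

Under the difficulty rule the two blocks are indistinguishable, whereas the intrinsic weighting credits the luckier block with more bits of work, which is the kind of quantity an intrinsic-weight rule can exploit.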

Related content

The Network Scale-up Method (NSUM) uses social networks and answers to "How many X's do you know?" questions to estimate hard-to-reach population sizes. This paper focuses on two biases associated with the NSUM. First, different populations are known to have different average social network sizes, introducing degree ratio bias. This is especially true for marginalized populations like sex workers and drug users, where members tend to have smaller social networks than the average person. Second, large subpopulations are weighted more heavily than small subpopulations in current NSUM estimators, leading to poor size estimates of small subpopulations. We show how the degree ratio affects size estimates, provide a method to estimate degree ratios without collecting additional data, and show that rescaling size estimates improves accuracy for smaller subpopulations. Using simulations and data from two sources, we demonstrate that our adjustment procedures improve the accuracy of NSUM size estimates.
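
For reference, a minimal sketch of the basic scale-up estimator and a degree-ratio rescaling is given below; the toy survey arrays and the degree-ratio value are illustrative assumptions, not the paper's data or its exact adjustment procedure:

```python
import numpy as np

# Minimal sketch of the basic network scale-up estimator and a degree-ratio
# rescaling. The toy survey arrays and the degree-ratio value are illustrative
# assumptions, not data or estimators taken from the paper.

def nsum_basic(y_hidden, degrees, total_pop):
    """Killworth-style estimator: N_H ~ N * (reported ties to the hidden
    population) / (respondents' total personal network size)."""
    return total_pop * np.sum(y_hidden) / np.sum(degrees)

def nsum_degree_adjusted(y_hidden, degrees, total_pop, degree_ratio):
    """Rescale by delta = (average degree of hidden-population members) /
    (average degree of the general population). If hidden members have smaller
    networks (delta < 1), the basic estimator undercounts and is scaled up."""
    return nsum_basic(y_hidden, degrees, total_pop) / degree_ratio

y = np.array([0, 1, 0, 2, 1])            # "How many members of H do you know?"
d = np.array([150, 300, 200, 600, 250])  # estimated personal network sizes
N = 1_000_000                            # known total population size

print(nsum_basic(y, d, N))                 # naive estimate
print(nsum_degree_adjusted(y, d, N, 0.7))  # assuming a degree ratio of 0.7
```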

Many organisations manage service quality by monitoring a large set of devices and servers, where each entity is associated with telemetry or physical sensor data series. Recently, various methods have been proposed to detect behavioural anomalies; however, existing approaches focus on multivariate time series and ignore communication between entities. Moreover, we aim to support end-users not only in locating the entities and sensors causing an anomaly at a certain period, but also in explaining this decision. We propose BAD, a scalable two-step approach to detect anomalies. First, we recover relations between entities in the network, since relations are often dynamic in nature and caused by an unknown underlying process. Next, we report anomalies based on an embedding of sequential patterns. Pattern mining is efficient and supports interpretation, i.e. patterns represent frequently occurring behaviour in time series. We extend pattern mining to filter sequential patterns based on frequency, temporal constraints and minimum description length. We collect and release two public datasets for international broadcasting and X from an Internet company. BAD achieves an overall F1-score of 0.78 on 9 benchmark datasets, significantly outperforming the best baseline by 3%. Additionally, BAD is an order of magnitude faster than state-of-the-art anomaly detection methods.
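
The following simplified sketch illustrates the general idea behind pattern-based anomaly scoring, with frequent sequential patterns acting as a model of normal behaviour and poorly covered windows flagged as anomalous; it is not the BAD method itself, and the synthetic series and all parameters are illustrative assumptions:

```python
from collections import Counter
import numpy as np

# Simplified illustration (not the BAD method itself): discretize a series into
# symbols, mine frequent n-grams as a model of normal sequential behaviour, and
# flag windows that are poorly covered by those frequent patterns.

def discretize(series, n_bins=4):
    """Map values to integer symbols by equal-width binning."""
    edges = np.linspace(series.min(), series.max(), n_bins + 1)[1:-1]
    return np.digitize(series, edges)

def mine_frequent(symbols, length=3, min_count=5):
    """Collect the n-grams that occur at least min_count times."""
    grams = Counter(tuple(symbols[i:i + length])
                    for i in range(len(symbols) - length + 1))
    return {g for g, c in grams.items() if c >= min_count}

def anomaly_scores(symbols, patterns, length=3, window=20):
    """Score each window by the fraction of its n-grams NOT in the frequent set."""
    scores = []
    for start in range(0, len(symbols) - window + 1, window):
        w = symbols[start:start + window]
        grams = [tuple(w[i:i + length]) for i in range(len(w) - length + 1)]
        scores.append(sum(g not in patterns for g in grams) / len(grams))
    return scores

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 600)) + 0.05 * rng.normal(size=600)
series[400:405] += 3.0                        # injected 5-point level shift
symbols = discretize(series)
frequent = mine_frequent(symbols)
print(int(np.argmax(anomaly_scores(symbols, frequent))))  # window covering the shift
```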

This white paper was written by the members of the Work Group focusing on design practices of the COST Action 18230 - Interactive Narrative Design for Complexity Representation (INDCOR, WG1). It presents an overview of Interactive Digital Narrative (IDN) design for complexity representation through IDN workflows and methodologies, IDN authoring tools, and applications. It provides definitions of the central elements of IDN alongside its best practices, designs and methods. Finally, it describes complexity as a feature of IDN, with related examples. In summary, this white paper serves as an orienting map for the field of IDN design, locating where we are in the contemporary panorama while charting the grounds of its promising future.

Gun violence is a major problem in contemporary American society, with tens of thousands injured each year. However, relatively little is known about the effects on family members and how effects vary across subpopulations. To study these questions and, more generally, to address a gap in the causal inference literature, we present a framework for the study of effect modification or heterogeneous treatment effects in difference-in-differences designs. We implement a new matching technique, which combines profile matching and risk set matching, to (i) preserve the time alignment of covariates, exposure, and outcomes, avoiding pitfalls of other common approaches for difference-in-differences, and (ii) explicitly control biases due to imbalances in observed covariates in subgroups discovered from the data. Our case study shows significant and persistent effects of nonfatal firearm injuries on several health outcomes for those injured and on the mental health of their family members. Sensitivity analyses reveal that these results are moderately robust to unmeasured confounding bias. Finally, while the effects for those injured are modified largely by the severity of the injury and its documented intent, for families, effects are strongest for those whose relative's injury is documented as resulting from an assault, self-harm, or law enforcement intervention.
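
For reference, the canonical two-period difference-in-differences contrast, written in generic notation rather than the paper's exact estimand (which additionally conditions on matched risk sets and on subgroups discovered from the data), is

$$\hat{\tau}_{\mathrm{DiD}} = \bigl(\bar{Y}^{\mathrm{treated}}_{\mathrm{post}} - \bar{Y}^{\mathrm{treated}}_{\mathrm{pre}}\bigr) - \bigl(\bar{Y}^{\mathrm{control}}_{\mathrm{post}} - \bar{Y}^{\mathrm{control}}_{\mathrm{pre}}\bigr).$$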

For terminal value problems of fractional differential equations of order $\alpha \in (0,1)$ that use Caputo derivatives, shooting methods are a well-developed and well-investigated approach. Based on recently established analytic properties of such problems, we develop a new technique, called proportional secting, for selecting the required initial values, which solves such shooting problems quickly and accurately. Numerical experiments confirm that the technique converges very quickly and accurately to the solution. Run time measurements indicate a speedup factor of between 4 and 10 compared to the standard bisection method.
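
The sketch below illustrates the generic shooting idea on a linear Caputo test equation: a candidate initial value is integrated forward with a basic explicit fractional Euler scheme and then updated with a secant-style rule until the terminal condition is met. The scheme, test equation, and tolerances are illustrative assumptions; this is not the authors' proportional secting algorithm:

```python
import math
import numpy as np

# Illustrative sketch: shooting for a terminal value problem of a Caputo
# fractional ODE, combining a basic explicit fractional Euler scheme with a
# secant-style update of the unknown initial value. The test equation
# D^alpha y = -y, the scheme, and the tolerances are assumptions made for the
# illustration; this is not the authors' proportional secting algorithm.

def frac_euler_terminal(f, y0, alpha, T, n):
    """Explicit rectangle-rule scheme for D^alpha y = f(t, y); returns y(T)."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    y = np.empty(n + 1)
    y[0] = y0
    c = h**alpha / math.gamma(alpha + 1.0)
    for k in range(1, n + 1):
        j = np.arange(k)
        w = (k - j)**alpha - (k - j - 1)**alpha     # quadrature weights
        y[k] = y0 + c * np.sum(w * f(t[j], y[j]))
    return y[-1]

def shoot(f, y_T, alpha, T, n, c0=0.0, c1=1.0, tol=1e-10, max_iter=50):
    """Secant iteration on the map c -> y(T; y(0) = c) - y_T."""
    g0 = frac_euler_terminal(f, c0, alpha, T, n) - y_T
    g1 = frac_euler_terminal(f, c1, alpha, T, n) - y_T
    for _ in range(max_iter):
        c2 = c1 - g1 * (c1 - c0) / (g1 - g0)        # secant update
        g2 = frac_euler_terminal(f, c2, alpha, T, n) - y_T
        if abs(g2) < tol:
            return c2
        c0, g0, c1, g1 = c1, g1, c2, g2
    return c1

f = lambda t, y: -y
y0 = shoot(f, y_T=0.5, alpha=0.6, T=1.0, n=400)
print(y0, frac_euler_terminal(f, y0, 0.6, 1.0, 400))   # second value is ~0.5
```

Because the terminal value depends (affinely, for a linear equation) on the initial value, the secant-style update needs only a handful of solver calls, whereas bisection halves the interval once per call.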

Defect prediction is crucial for software quality assurance and has been extensively researched over recent decades. However, prior studies rarely focus on data complexity in defect prediction tasks, and even less on understanding the difficulties of these tasks from the perspective of data complexity. In this paper, we conduct an empirical study to estimate the hardness of over 33,000 instances, employing a set of measures to characterize the inherent difficulty of instances and the characteristics of defect datasets. Our findings indicate that: (1) instance hardness in both classes displays a right-skewed distribution, with the defective class exhibiting a more scattered distribution; (2) class overlap is the primary factor influencing instance hardness and can be characterized through feature, structural, instance, and multiresolution overlap; (3) no universal preprocessing technique is applicable to all datasets, and such techniques may not consistently reduce data complexity; fortunately, dataset complexity measures can help identify suitable techniques for specific datasets; (4) integrating data complexity information into the learning process can enhance an algorithm's learning capacity. In summary, this empirical study highlights the crucial role of data complexity in defect prediction tasks, and provides a novel perspective for advancing research in defect prediction techniques.
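
As one concrete illustration of an instance-hardness measure (an example choice, not the study's full measure set), the widely used k-Disagreeing Neighbors (kDN) score is the fraction of an instance's k nearest neighbours that carry a different label:

```python
import numpy as np

# k-Disagreeing Neighbors (kDN), one widely used instance-hardness measure:
# the fraction of an instance's k nearest neighbours that carry a different
# label. This is an illustration of the kind of measure involved, not the
# study's full set of complexity measures.

def kdn_hardness(X, y, k=5):
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # exclude the instance itself
    nn = np.argsort(dists, axis=1)[:, :k]      # indices of the k nearest neighbours
    return (y[nn] != y[:, None]).mean(axis=1)  # per-instance hardness in [0, 1]

# Toy, imbalanced dataset with overlapping classes (a stand-in for defect data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (80, 2)), rng.normal(1.0, 1.0, (20, 2))])
y = np.array([0] * 80 + [1] * 20)
h = kdn_hardness(X, y)
print(h[y == 1].mean(), h[y == 0].mean())  # minority instances tend to score harder
```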

The entropy production rate is a central quantity in non-equilibrium statistical physics, scoring how far a stochastic process is from being time-reversible. In this paper, we compute the entropy production of diffusion processes at non-equilibrium steady-state under the condition that the time-reversal of the diffusion remains a diffusion. We start by characterising the entropy production of both discrete and continuous-time Markov processes. We investigate the time-reversal of time-homogeneous stationary diffusions and recall the most general conditions under which the time-reversal is itself a diffusion, a setting which includes hypoelliptic and degenerate diffusions, and locally Lipschitz vector fields. We decompose the drift into its time-reversible and irreversible parts, or equivalently, the generator into symmetric and antisymmetric operators. We show the equivalence with a decomposition of the backward Kolmogorov equation considered in hypocoercivity theory, and a decomposition of the Fokker-Planck equation in GENERIC form. The main result shows that when the time-irreversible part of the drift is in the range of the volatility matrix (almost everywhere) the forward and time-reversed path space measures of the process are mutually equivalent, and evaluates the entropy production. When this does not hold, the measures are mutually singular and the entropy production is infinite. We verify these results using exact numerical simulations of linear diffusions. We illustrate the discrepancy between the entropy production of non-linear diffusions and their numerical simulations in several examples and show how the entropy production can be used for accurate numerical simulation. Finally, we discuss the relationship between time-irreversibility and sampling efficiency, and how we can modify the definition of entropy production to score how far a process is from being generalised reversible.
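
For reference, the entropy production rate discussed above is the relative-entropy rate between the forward and time-reversed path measures of the stationary process (generic notation),

$$e_p = \lim_{T\to\infty} \frac{1}{T}\, D_{\mathrm{KL}}\!\bigl(P_{[0,T]} \,\big\|\, \bar{P}_{[0,T]}\bigr),$$

and the drift of $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ is decomposed as $b = b_{\mathrm{rev}} + b_{\mathrm{irr}}$, where the reversible part generates a time-reversible dynamic and it is the irreversible part, through its relation to the range of the volatility matrix, that determines whether $e_p$ is finite.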

Given a matroid $M=(E,{\cal I})$ and a total ordering over the elements $E$, a broken circuit is a circuit with its smallest element removed, and an NBC independent set is an independent set in ${\cal I}$ containing no broken circuit. The NBC independent sets of any matroid $M$ define a simplicial complex called the broken circuit complex, which has been the subject of intense study in combinatorics. Recently, Adiprasito, Huh and Katz showed that the face numbers of any broken circuit complex form a log-concave sequence, proving a long-standing conjecture of Rota. We study counting and optimization problems on the NBC bases of a generic matroid. We find several fundamental differences with the independent set complex: for example, we show that it is NP-hard to find the max-weight NBC base of a matroid, and that the convex hull of the NBC bases of a matroid has edges of arbitrarily large length. We also give evidence that the natural down-up walk on the space of NBC bases of a matroid may not mix rapidly, by showing that for some family of matroids it is NP-hard to count the number of NBC bases after certain conditionings.
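
As a small worked example (ours, not taken from the paper): for the graphic matroid of a triangle with edges ordered $e_1 < e_2 < e_3$, the unique circuit is $\{e_1, e_2, e_3\}$, so the only broken circuit is $\{e_2, e_3\}$. The NBC bases are $\{e_1, e_2\}$ and $\{e_1, e_3\}$, while the basis $\{e_2, e_3\}$ is excluded because it contains the broken circuit, and the face numbers $(1, 3, 2)$ of the resulting broken circuit complex indeed form a log-concave sequence.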

In this article, we develop an interdisciplinary analysis of MEV that aims to bridge the gap between technical and legal research, supporting policymakers in their regulatory decisions concerning blockchains, DeFi and the associated risks. Consequently, this article is intended for both technical and legal audiences, and while we abstain from a detailed legal analysis, we aim to open a policy discussion regarding decentralized governance design at the block-building layer, the place where MEV occurs. Maximal Extractable Value, or MEV, has been one of the major concerns in blockchain design as it creates a centralizing force which ultimately affects user transactions. In this article, we dive into the technicalities behind MEV, explaining the concept behind the novel Proposer-Builder Separation (PBS) design as an effort by Flashbots to increase decentralization through modularity. We underline potential vulnerability factors in the PBS design which open space for MEV-extracting adversarial strategies by inside participants. We discuss the shift of trust from validators to builders in PoS blockchains such as Ethereum, acknowledging the impact that the latter may have on users' transactions (in terms of front-running) and on censorship resistance (in terms of transaction inclusion). We recognize that under PBS, centralized (dominant) entities such as builders could potentially harm users by extracting MEV via front-running strategies. Finally, we suggest adequate design and policy measures which could potentially mitigate these negative effects while protecting blockchain users.

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
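
To give a flavour of the two-stage template described above, the sketch below runs a spectral initialization followed by plain gradient descent on the factored objective for a small matrix completion instance. The problem sizes, sampling rate, step size, and iteration count are illustrative choices, not the tuned settings analysed in the overview:

```python
import numpy as np

# Hedged sketch of the two-stage template on low-rank matrix completion:
# spectral initialization followed by gradient descent on the factored
# objective f(U, V) = 0.5 * ||P_Omega(U V^T - M)||_F^2. Problem sizes,
# sampling rate, step size, and iteration count are illustrative choices.

rng = np.random.default_rng(0)
n, r, p = 100, 3, 0.3                                    # dimension, rank, sampling rate
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))    # ground-truth low-rank matrix
mask = rng.random((n, n)) < p                            # observed entries Omega

# Stage 1: spectral initialization from the rescaled observed matrix.
U0, s, V0t = np.linalg.svd((mask * M) / p)
U = U0[:, :r] * np.sqrt(s[:r])
V = V0t[:r, :].T * np.sqrt(s[:r])

# Stage 2: vanilla gradient descent on the nonconvex factored objective.
eta = 0.002
for _ in range(500):
    R = mask * (U @ V.T - M)                 # residual on observed entries only
    U, V = U - eta * R @ V, V - eta * R.T @ U
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))   # relative recovery error
```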
