
As blockchains seek to scale to larger numbers of nodes, the communication complexity of their protocols has become a significant concern, since the network can quickly become overburdened. Several schemes have attempted to address this, one of which uses coded computation to lighten the load. Here we address an issue shared by all coded blockchain schemes known to the authors: transaction confirmation. In a coded blockchain, only the leader has access to the uncoded block, while the nodes receive encoded data that makes it effectively impossible for them to identify which transactions were included in the block. As a result, a Byzantine leader might choose not to notify the sender or receiver of a transaction that it was included in the block, and even with an honest leader, the nodes would not be able to produce a proof of a transaction's inclusion. To address this, we construct a protocol that sends the nodes enough information that a client sending or receiving a transaction is guaranteed not only to be notified but also to receive a proof of that transaction's inclusion in the block. Crucially, we do this without substantially increasing the bit complexity of the original coded blockchain protocol.
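
The abstract does not spell out what such an inclusion proof looks like; one natural instantiation (purely our illustration, not necessarily the paper's construction) is a Merkle membership proof over the block's transactions. The sketch below, with hypothetical helper names such as `build_proof`, shows how a client could verify inclusion given only a transaction, a short proof, and the block's Merkle root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves):
    # Build all levels of a Merkle tree (duplicate the last node on odd-sized levels).
    levels = [[h(tx) for tx in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def build_proof(leaves, index):
    # Collect the sibling hashes needed to recompute the root from leaf `index`.
    proof, levels = [], merkle_levels(leaves)
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof, levels[-1][0]

def verify(tx, proof, root):
    # Recompute the root from the leaf hash and the sibling path.
    node = h(tx)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

txs = [b"alice->bob:5", b"carol->dave:2", b"erin->frank:9"]
proof, root = build_proof(txs, 1)
assert verify(b"carol->dave:2", proof, root)
```

A proof of this form grows only logarithmically in the number of transactions in a block, which is the kind of overhead that would not substantially increase a protocol's bit complexity.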

Related content

A blockchain is a distributed database system maintained by participating nodes; its records cannot be altered or forged, and it can also be understood as a ledger. It is a core concept of Bitcoin: a full copy of the Bitcoin blockchain records every transaction of its tokens. From this information, one can determine the value held by any address at any point in its history.
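
As a toy illustration of these properties (not any particular chain's actual data format), the sketch below builds a minimal hash-linked ledger and replays it to recover an address's balance; all names and values are made up.

```python
import hashlib, json

def make_block(prev_hash, txs):
    # Each block references the previous block's hash, so altering history breaks the chain.
    body = json.dumps({"prev": prev_hash, "txs": txs}, sort_keys=True)
    return {"prev": prev_hash, "txs": txs,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

chain = [make_block("0" * 64, [("coinbase", "alice", 50)])]
chain.append(make_block(chain[-1]["hash"], [("alice", "bob", 20)]))

def balance(chain, addr):
    # Replay every recorded transfer to recover an address's current balance.
    bal = 0
    for block in chain:
        for sender, receiver, amount in block["txs"]:
            if sender == addr:
                bal -= amount
            if receiver == addr:
                bal += amount
    return bal

print(balance(chain, "alice"))  # 30
print(balance(chain, "bob"))    # 20
```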


Integrating coded caching (CC) techniques into multi-input multi-output (MIMO) setups provides a substantial performance boost in terms of the achievable degrees of freedom (DoF). In this paper, we study cache-aided MIMO setups where a single server with $L$ transmit antennas communicates with a number of users each with $G$ receive antennas. We extend a baseline CC scheme, originally designed for multi-input single-output (MISO) systems, to the considered MIMO setup. However, in the proposed MIMO approach, instead of merely replicating the transmit strategy of the baseline MISO scheme, we adjust the number of users served in each transmission to maximize the achievable DoF. This approach not only makes the extension more flexible in terms of supported network parameters but also results in an improved DoF of $\max_{\beta \le G} \beta \lfloor \frac{L-1}{\beta} \rfloor + \beta (t+1)$, where $t$ is the coded caching gain. In addition, we propose a high-performance multicast transmission design for the considered MIMO-CC setup by formulating a symmetric rate maximization problem in terms of the transmit covariance matrices of the multicast signals and solving the resulting non-convex problem with successive convex approximation. Finally, we use numerical simulations to verify both the improved DoF results and the enhanced MIMO multicasting performance.
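
The stated DoF expression can be evaluated numerically by searching over the spatial multiplexing factor $\beta \le G$. The short sketch below does exactly that for illustrative parameter values (the values are our own example, not taken from the paper).

```python
from math import floor

def mimo_cc_dof(L, G, t):
    """Evaluate the achievable DoF stated in the abstract,
    max over beta <= G of  beta*floor((L-1)/beta) + beta*(t+1),
    by searching over the factor beta."""
    best = 0
    for beta in range(1, G + 1):
        dof = beta * floor((L - 1) / beta) + beta * (t + 1)
        best = max(best, dof)
    return best

# Example: L = 8 transmit antennas, G = 4 receive antennas per user, caching gain t = 2.
print(mimo_cc_dof(L=8, G=4, t=2))
```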

Deep neural networks (DNNs) have made significant progress but often suffer from fairness issues, as deep models typically show distinct accuracy differences among certain subgroups (e.g., males and females). Existing research addresses this critical issue by employing fairness-aware loss functions to constrain the last-layer outputs and directly regularize DNNs. Although the fairness of DNNs is improved, it is unclear how the trained network makes a fair prediction, which limits future fairness improvements. In this paper, we investigate fairness from the perspective of decision rationale and define the parameter parity score to characterize the fair decision process of networks by analyzing neuron influence in various subgroups. Extensive empirical studies show that unfairness can arise from unaligned decision rationales across subgroups. Existing fairness regularization terms fail to achieve decision rationale alignment because they only constrain last-layer outputs while ignoring intermediate neuron alignment. To address this issue, we formulate fairness as a new task, i.e., decision rationale alignment, which requires DNNs' neurons to have consistent responses across subgroups at both intermediate processes and the final prediction. To make this idea practical during optimization, we relax the naive objective function and propose gradient-guided parity alignment, which encourages gradient-weighted consistency of neurons across subgroups. Extensive experiments on a variety of datasets show that our method can significantly enhance fairness while sustaining a high level of accuracy, outperforming other approaches by a wide margin.
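
The paper's exact objective is not reproduced here, but the general shape of a gradient-weighted consistency penalty between subgroup activations can be sketched as follows; the tensors, shapes, and function name are purely illustrative.

```python
import numpy as np

def parity_alignment_penalty(acts_a, acts_b, grads_a, grads_b):
    """Illustrative gradient-weighted consistency penalty between the mean
    neuron activations of two subgroups (inputs have shape [batch, neurons])."""
    mean_a, mean_b = acts_a.mean(axis=0), acts_b.mean(axis=0)
    # Weight each neuron by the magnitude of its loss gradient, so that neurons
    # mattering more for the prediction are aligned more strongly across subgroups.
    weight = 0.5 * (np.abs(grads_a).mean(axis=0) + np.abs(grads_b).mean(axis=0))
    return float(np.sum(weight * (mean_a - mean_b) ** 2))

rng = np.random.default_rng(0)
acts_male, acts_female = rng.normal(size=(32, 64)), rng.normal(size=(32, 64))
grads_male, grads_female = rng.normal(size=(32, 64)), rng.normal(size=(32, 64))
print(parity_alignment_penalty(acts_male, acts_female, grads_male, grads_female))
```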

Eddy detection is a critical task for ocean scientists seeking to understand and analyze ocean circulation. In this paper, we introduce a hybrid eddy detection approach that combines sea surface height (SSH) and velocity fields with geometric criteria defining eddy behavior. Our approach searches for SSH minima and maxima, which oceanographers expect to find at the center of eddies. Geometric criteria are used to verify expected velocity field properties, such as net rotation and symmetry, by tracing velocity components along a circular path surrounding each eddy center. Progressive searches outward and into deeper layers yield each eddy's 3D region of influence. Isolating each eddy structure from the dataset, using its cylindrical footprint, facilitates visualization of internal eddy structures using horizontal velocity, vertical velocity, temperature, and salinity. A quantitative comparison of Okubo-Weiss (OW) vorticity thresholding, the standard winding-angle method, and this new SSH-velocity hybrid method of eddy detection, as applied to the Red Sea dataset, suggests that detection results are highly dependent on the choices of method, thresholds, and criteria. Our new SSH-velocity hybrid detection approach has the advantages of providing eddy structures with verified rotation properties, 3D visualization of the internal structure of physical properties, and rapid, efficient estimation of eddy footprints without calculating streamlines. Our approach combines visualization of internal structure with tracking of overall movement to support the study of the transport mechanisms key to understanding the interaction of nutrient distribution and ocean circulation. Our method is applied to three different datasets to showcase the generality of its application.
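
As a rough illustration of the two detection ingredients (SSH extrema as candidate centers, and a net-rotation check along a circular path), the sketch below applies them to a synthetic solid-body vortex; the radius, sample count, and rotation threshold are placeholder values, not the paper's settings.

```python
import numpy as np

def ssh_extrema(ssh):
    """Grid points where SSH is a local min or max over its 3x3 neighbourhood
    (candidate eddy centres)."""
    centres = []
    for i in range(1, ssh.shape[0] - 1):
        for j in range(1, ssh.shape[1] - 1):
            window = ssh[i - 1:i + 2, j - 1:j + 2]
            if ssh[i, j] == window.max() or ssh[i, j] == window.min():
                centres.append((i, j))
    return centres

def net_rotation(u, v, centre, radius=3, samples=16):
    """Mean tangential velocity along a circle around `centre`; a clearly
    non-zero value indicates the coherent rotation expected of an eddy."""
    ci, cj = centre
    angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    tang = []
    for a in angles:
        i = int(round(ci + radius * np.sin(a)))
        j = int(round(cj + radius * np.cos(a)))
        if 0 <= i < u.shape[0] and 0 <= j < u.shape[1]:
            # Project the velocity (u east, v north) onto the counter-clockwise
            # tangential direction (-sin a, cos a).
            tang.append(-u[i, j] * np.sin(a) + v[i, j] * np.cos(a))
    return float(np.mean(tang)) if tang else 0.0

# Synthetic solid-body vortex centred in a 21x21 grid, for a quick sanity check.
n = 21
y, x = np.mgrid[0:n, 0:n] - n // 2
u, v = -y.astype(float), x.astype(float)        # counter-clockwise rotation
ssh = -(x ** 2 + y ** 2).astype(float)          # SSH maximum at the centre
centres = [c for c in ssh_extrema(ssh) if abs(net_rotation(u, v, c)) > 1.0]
print(centres)
```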

Most link prediction methods return estimates of the connection probability of missing edges in a graph. Such output can be used to rank the missing edges from most to least likely to be a true edge, but it does not directly provide a classification into true and non-existent edges. In this work, we consider the problem of identifying a set of true edges with a control of the false discovery rate (FDR). We propose a novel method based on high-level ideas from the literature on conformal inference. The graph structure induces intricate dependence in the data, which we carefully take into account, as this makes the setup different from the usual conformal inference setting, where exchangeability is assumed. The FDR control is demonstrated empirically on both simulated and real data.
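
The paper's contribution lies precisely in handling the graph-induced dependence; the sketch below only shows the generic conformal machinery it builds on, i.e., conformal p-values computed against a calibration set of known non-edges followed by a Benjamini-Hochberg selection, under an exchangeability assumption and with synthetic scores.

```python
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    """Conformal p-value for each test score against calibration scores drawn from
    the null (non-edge) distribution: p = (1 + #{cal >= test}) / (n_cal + 1)."""
    cal = np.asarray(cal_scores)
    return np.array([(1 + np.sum(cal >= s)) / (len(cal) + 1) for s in test_scores])

def benjamini_hochberg(pvals, alpha=0.1):
    """Indices of rejected hypotheses (declared edges) at FDR level alpha."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, len(p) + 1) / len(p)
    below = np.where(p[order] <= thresh)[0]
    if len(below) == 0:
        return np.array([], dtype=int)
    return order[: below.max() + 1]

rng = np.random.default_rng(1)
cal = rng.uniform(size=200)                        # scores of known non-edges
test = np.concatenate([rng.uniform(size=50),       # true non-edges
                       rng.uniform(0.9, 1.0, 40)]) # true edges score higher
declared = benjamini_hochberg(conformal_pvalues(cal, test), alpha=0.1)
print(len(declared), "edges declared")
```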

This study addresses a fundamental, yet overlooked, gap between standard theory and empirical modelling practice in the OLS regression model $\boldsymbol{y}=\boldsymbol{X\beta}+\boldsymbol{u}$ with collinearity. While an estimated model in practice is desired to have stability and efficiency in its "individual OLS estimates", $\boldsymbol{y}$ itself has no capacity to identify and control the collinearity in $\boldsymbol{X}$, and hence no theory, including a model selection process (MSP), can fill this gap unless $\boldsymbol{X}$ is controlled from the viewpoint of sampling theory. In this paper, first introducing the new concept of "empirically effective modelling" (EEM), we propose our EEM methodology (EEM-M) as an integrated process of two MSPs with data $(\boldsymbol{y^o},\boldsymbol{X})$ given. The first MSP uses $\boldsymbol{X}$ only, called the XMSP, and pre-selects a class $\mathscr{D}$ of models with individually inefficiency-controlled and collinearity-controlled OLS estimates, where the two corresponding controlling variables are derived from the predictive standard error of each estimate. Next, defining an inefficiency-collinearity risk index for each model, a partial ordering is introduced onto the set of models for comparison without using $\boldsymbol{y^o}$, and the betterness and admissibility of models are discussed. The second MSP is a commonly used MSP that uses $(\boldsymbol{y^o},\boldsymbol{X})$ and evaluates total model performance as a whole by criteria such as AIC and BIC to select an optimal model from $\mathscr{D}$. Third, to materialize the XMSP, two algorithms are proposed.
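
The paper's specific controlling variables and risk index are not reproduced here; as a loose illustration of an X-only pre-selection step in the spirit of the XMSP, the sketch below screens candidate regressor subsets with a standard collinearity diagnostic (variance inflation factors) computed without using $\boldsymbol{y^o}$. The threshold is a conventional placeholder, not the paper's criterion.

```python
import numpy as np
from itertools import combinations

def vif(X):
    """Variance inflation factor of each column of X (computed from X alone,
    i.e. without using y), a standard collinearity diagnostic."""
    n, p = X.shape
    out = []
    for j in range(p):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        out.append(1.0 / max(1 - r2, 1e-12))
    return np.array(out)

def screen_models(X, max_vif=10.0):
    """Keep only regressor subsets whose worst-case VIF stays below a threshold:
    a y-free pre-selection pass over candidate models."""
    kept = []
    for k in range(2, X.shape[1] + 1):
        for subset in combinations(range(X.shape[1]), k):
            if vif(X[:, subset]).max() <= max_vif:
                kept.append(subset)
    return kept

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100), rng.normal(size=100)])
print(screen_models(X))  # subsets containing both near-collinear columns are dropped
```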

Complex Query Answering (CQA) is an important and fundamental task in knowledge graph (KG) reasoning. Query encoding (QE) has been proposed as a fast and robust solution to CQA. In the encoding process, most existing QE methods first parse the logical query into an executable computational directed acyclic graph (DAG), then use neural networks to parameterize the operators, and finally recursively execute these neuralized operators. However, this parameterization-and-execution paradigm may be over-complicated, as it can be structurally simplified by a single neural network encoder. Meanwhile, sequence encoders, such as LSTMs and Transformers, have proved effective for encoding semantic graphs in related tasks. Motivated by this, we propose sequential query encoding (SQE) as an alternative way to encode queries for CQA. Instead of parameterizing and executing the computational graph, SQE first uses a search-based algorithm to linearize the computational graph into a sequence of tokens and then uses a sequence encoder to compute its vector representation. This vector representation is used as a query embedding to retrieve answers from the embedding space according to similarity scores. Despite its simplicity, SQE demonstrates state-of-the-art neural query encoding performance on FB15k, FB15k-237, and NELL on an extended benchmark including twenty-nine types of in-distribution queries. Further experiments show that SQE also demonstrates comparable knowledge inference capability on out-of-distribution queries, whose query types are not observed during the training process.
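
The search-based linearization and tokenizer used by SQE are not reproduced here; the toy sketch below only illustrates the general idea of flattening a nested query's computational graph into a token sequence that a standard sequence encoder could then embed. The relation and entity names are made up.

```python
def linearize(query):
    """Depth-first linearization of a nested logical query into a flat token list.
    (Toy serialization; the paper's search-based linearization is not reproduced.)"""
    if isinstance(query, str):
        return [query]
    op, *args = query
    tokens = ["(", op]
    for arg in args:
        tokens += linearize(arg)
    return tokens + [")"]

# A projection over the intersection of two one-hop projections:
# "who directed movies that starred Alice and won an Oscar".
query = ("projection", "rel:directed_by",
         ("intersection",
          ("projection", "rel:starred", "entity:Alice"),
          ("projection", "rel:won", "entity:Oscar")))
tokens = linearize(query)
print(tokens)
# The token sequence would then be fed to a sequence encoder (e.g. a Transformer),
# whose pooled output serves as the query embedding for nearest-neighbour retrieval.
```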

Stealth addresses are an approach to enhancing privacy within public and distributed blockchains such as Ethereum and Bitcoin. Stealth address protocols generate a distinct, randomly generated address for the recipient, thereby concealing interactions between entities. In this study, we introduce BaseSAP, an autonomous base-layer protocol for embedding stealth addresses within the application layer of programmable blockchains. BaseSAP expands upon previous research to develop a modular protocol for executing unlinkable transactions on public blockchains. Capitalizing on its modularity, BaseSAP allows additional stealth address layers using different cryptographic algorithms to be developed on top of the primary implementation. To demonstrate the effectiveness of our proposed protocol, we present simulations of an advanced Secp256k1-based dual-key stealth address protocol. This protocol is designed on top of BaseSAP and is deployed on the Goerli and Sepolia test networks as the first prototype implementation. Furthermore, we provide cost analyses and underscore potential security ramifications and attack vectors that could affect the privacy of stealth addresses. Our study reveals the flexibility of the BaseSAP protocol and offers insight into the broader implications of stealth address technology.
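
BaseSAP's on-chain mechanics are not reproduced here; the sketch below only illustrates the off-chain key derivation of a generic secp256k1 dual-key (scan key plus spend key) stealth address scheme of the kind the abstract refers to, using textbook affine curve arithmetic. It is a didactic sketch, not production cryptography.

```python
import hashlib, secrets

# secp256k1 domain parameters (textbook affine arithmetic; not constant-time).
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    # Affine point addition with None as the point at infinity.
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = (3 * a[0] * a[0]) * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, point):
    # Double-and-add scalar multiplication.
    result, addend = None, point
    while k:
        if k & 1: result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

def h_scalar(point):
    # Hash a curve point to a scalar mod N (toy encoding of the point).
    return int.from_bytes(hashlib.sha256(str(point).encode()).digest(), "big") % N

# Recipient publishes a scan key S = s*G and a spend key B = b*G.
s, b = secrets.randbelow(N - 1) + 1, secrets.randbelow(N - 1) + 1
S, B = ec_mul(s, G), ec_mul(b, G)

# Sender picks an ephemeral key r, publishes R = r*G, and derives the stealth
# public key B + H(r*S)*G; only the recipient can recover the matching private
# key b + H(s*R), since r*S and s*R are the same shared point.
r = secrets.randbelow(N - 1) + 1
R = ec_mul(r, G)
stealth_pub_sender = ec_add(B, ec_mul(h_scalar(ec_mul(r, S)), G))
stealth_priv_recipient = (b + h_scalar(ec_mul(s, R))) % N
assert ec_mul(stealth_priv_recipient, G) == stealth_pub_sender
```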

Following the increasing prevalence of malicious applications and cyber threats in general, program analysis has become a ubiquitous technique for extracting relevant features. However, current state-of-the-art solutions seem to fall behind new techniques. For instance, dynamic binary instrumentation (DBI) provides some promising results, but falls short when it comes to ease of use and resistance to analysis evasion. In this regard, we propose a two-fold contribution. First, we introduce COBAI (Complex Orchestrator for Binary Analysis and Instrumentation), a DBI framework designed for malware analysis that prioritizes ease of use and analysis transparency without imposing a significant overhead. Second, we introduce an aggregated test suite intended to serve as a benchmark for determining the quality of an analysis solution with respect to protection against evasion mechanisms. The efficiency of our solution is validated by a careful evaluation that takes into consideration other DBI frameworks, analysis environments, and the proposed benchmark.

Continued model-based decision support is associated with particular challenges, especially in long-term projects. Because the questions asked change regularly, and the understanding of the underlying system often changes as well, the models used must be regularly re-evaluated, re-modelled, and re-implemented with respect to the changing modelling purpose, system boundaries, and mapped causalities. Usually, this leads to models with continuously growing complexity and volume. In this work we revisit the idea of the model family, dating back to the 1990s, and promote it as a mindset for the creation of decision support frameworks in large research projects. The idea is not to develop and enhance a single standalone model, but to divide the research tasks among interacting smaller models, each corresponding specifically to a research question. This strategy comes with many advantages, which we explain using the example of a family of models for decision support in the COVID-19 crisis and corresponding success stories. We describe the individual models, explain their role within the family, and show how they are used, individually and with each other.

As Large Language Models (LLMs) become increasingly integrated into our everyday lives, understanding their ability to comprehend human mental states becomes critical for ensuring effective interactions. However, despite recent attempts to assess the Theory-of-Mind (ToM) reasoning capabilities of LLMs, the degree to which these models can align with human ToM remains a nuanced topic of exploration. This is primarily due to two distinct challenges: (1) the presence of inconsistent results from previous evaluations, and (2) concerns surrounding the validity of existing evaluation methodologies. To address these challenges, we present a novel framework for procedurally generating evaluations with LLMs by populating causal templates. Using our framework, we create a new social reasoning benchmark (BigToM) for LLMs, which consists of 25 controls and 5,000 model-written evaluations. We find that human participants rate the quality of our benchmark higher than previous crowd-sourced evaluations and comparable to expert-written evaluations. Using BigToM, we evaluate the social reasoning capabilities of a variety of LLMs and compare model performance with human performance. Our results suggest that GPT-4 has ToM capabilities that mirror human inference patterns, though less reliably, while other LLMs struggle.
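
In the paper, the causal templates are populated by LLMs; the toy sketch below instead enumerates hand-written fillers, purely to illustrate the template-population mechanics (the template text, fillers, and answer rule are our own, not BigToM items).

```python
import itertools

# Hypothetical causal template: an agent forms a belief from an initial percept,
# the world then changes unseen, and we probe the agent's (now false) belief.
TEMPLATE = ("{agent} puts the {object} in the {loc_a}. While {agent} is away, "
            "{other} moves the {object} to the {loc_b}. Where does {agent} think "
            "the {object} is?")

FILLERS = {
    "agent":  ["Noor", "Ravi"],
    "other":  ["a coworker", "her brother"],
    "object": ["keys", "notebook"],
    "loc_a":  ["drawer", "backpack"],
    "loc_b":  ["shelf", "car"],
}

def populate(template, fillers):
    # Enumerate every combination of filler values and render one evaluation item
    # per combination; the correct answer follows from the causal structure (loc_a).
    keys = list(fillers)
    for values in itertools.product(*(fillers[k] for k in keys)):
        slots = dict(zip(keys, values))
        yield {"question": template.format(**slots), "answer": slots["loc_a"]}

items = list(populate(TEMPLATE, FILLERS))
print(len(items), "generated items")
print(items[0]["question"])
```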
