The proof-of-randomness (PoR) protocol is a fair and energy-efficient consensus mechanism for blockchains. Each network node of a blockchain may use a true random number generator (TRNG) and a hash algorithm to implement the PoR protocol. In this whitepaper, we present the consensus mechanism of the PoR protocol and classify it as a new kind of randomized algorithm, called Macau. The PoR protocol can generate a blockchain without any competition in computing power or stake of cryptocurrency. In addition, we describe the advantages of integrating quantum random number generator (QRNG) chips into hardware wallets, and discuss how the protocol can work together with quantum key distribution (QKD) technology.
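To make the general idea concrete, the following minimal sketch illustrates a generic randomness-plus-hash leader-selection round of the kind a PoR node could run. The commit-reveal structure, the "smallest hash wins" rule, and all names here are placeholder assumptions for illustration only, not the actual PoR/Macau specification.

```python
# Minimal sketch of a generic randomness-based leader-selection round.
# The commit/reveal rules, tie-breaking, and validation logic below are
# illustrative assumptions, not the PoR/Macau protocol itself.
import hashlib
import secrets

def draw_and_commit():
    """Each node draws a random value (TRNG/QRNG stand-in) and publishes its hash."""
    value = secrets.token_bytes(32)                 # stand-in for a TRNG sample
    commitment = hashlib.sha256(value).hexdigest()  # hash-based commitment
    return value, commitment

def elect_leader(revealed):
    """Pick the proposer deterministically from the revealed random values,
    e.g. the node whose hashed value is smallest (one possible rule)."""
    return min(revealed, key=lambda item: hashlib.sha256(item[1]).hexdigest())[0]

nodes = ["node-A", "node-B", "node-C"]
values, commitments = {}, {}
for n in nodes:
    values[n], commitments[n] = draw_and_commit()   # commit phase
print("proposer:", elect_leader(list(values.items())))  # reveal + election phase
```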
Real-world data is an increasingly utilized resource for post-market monitoring of vaccines and provides insight into real-world effectiveness. However, outside the setting of a clinical trial, heterogeneous mechanisms may drive observed breakthrough infection rates among vaccinated individuals; for instance, waning of vaccine-induced immunity over time or the emergence of a new strain against which the vaccine has reduced protection. Analyses of infection incidence rates are typically predicated on a presumed mechanism through their choice of an "analytic time zero" after which infection rates are modeled. In this work, we propose an explicit test for the driving mechanism, situated in a standard Cox proportional hazards framework. We explore the test's performance in simulation studies and in an illustrative application to real-world data. We additionally introduce subgroup differences in infection incidence and evaluate the impact of time-zero misspecification on the bias and coverage of model estimates. We observe strong power and controlled type I error of the test for detecting the correct infection-driving mechanism under various settings. Consistent with previous studies, we find reduced bias and greater coverage of estimates when the analytic time zero is correctly specified or accounted for.
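For readers less familiar with the underlying framework, the following is a minimal sketch of a standard Cox proportional hazards fit with a chosen analytic time zero, using the lifelines package. The synthetic data, column names, and choice of time zero (vaccination date) are illustrative assumptions; the explicit test for the driving mechanism proposed in this work is not reproduced here.

```python
# Minimal sketch of the standard Cox proportional hazards setup that such an
# analysis builds on. Data and column names are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    # follow-up time measured from the chosen analytic time zero (here: vaccination)
    "time_since_vaccination": [30, 90, 180, 210, 365, 400],
    "breakthrough_infection": [0, 1, 0, 1, 1, 0],   # event indicator
    "subgroup": [0, 0, 1, 1, 0, 1],                 # e.g. an age group or site indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_since_vaccination",
        event_col="breakthrough_infection")
cph.print_summary()   # hazard ratios for the included covariates
```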
We propose a local modification of the standard subdiffusion model by introducing an initial Fickian diffusion, which results in a multiscale diffusion model. The developed model resolves the incompatibility between the nonlocal operators in subdiffusion and the local initial conditions, and thus eliminates the initial singularity of the solutions of the subdiffusion model while retaining its heavy-tail behavior away from the initial time. The well-posedness of the model and high-order regularity estimates of its solutions are analyzed by resolvent estimates, based on which the numerical discretization and analysis are performed. Numerical experiments are carried out to substantiate the theoretical findings.
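For orientation, the standard (Caputo-type) subdiffusion model referred to above typically reads
\[
\partial_t^{\alpha} u(x,t) - \Delta u(x,t) = f(x,t), \qquad 0<\alpha<1,
\qquad
\partial_t^{\alpha} u(t) := \frac{1}{\Gamma(1-\alpha)}\int_0^t (t-s)^{-\alpha}\,\partial_s u(s)\, ds,
\]
where the nonlocal fractional operator is responsible for both the heavy-tail behavior and the initial-layer singularity. The precise form of the proposed multiscale modification, which introduces Fickian (classical) diffusion near the initial time, is given in the paper itself and is not reproduced here.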
A new approach to local and global explanation is proposed. It is based on a convex hull constructed from a finite number of points around an explained instance. The convex hull allows us to consider a dual representation of instances in the form of convex combinations of the extreme points of the resulting polytope. Instead of perturbing new instances in the Euclidean feature space, vectors of convex-combination coefficients are uniformly generated from the unit simplex, and they form a new dual dataset. A dual linear surrogate model is trained on this dual dataset. The explanation feature-importance values are then computed by means of simple matrix calculations. The approach can be regarded as a modification of the well-known LIME model. The dual representation inherently allows us to obtain example-based explanations. The neural additive model is also considered as a tool for implementing the example-based explanation approach. Many numerical experiments with real datasets are performed to study the approach. The code of the proposed algorithms is available.
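A minimal sketch of the dual-representation idea is given below, assuming a generic black-box model. The way the local points are sampled, the Dirichlet sampling of the simplex, and the final least-squares mapping from dual coefficients back to feature importances are simplified assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: dual dataset on the unit simplex + dual linear surrogate.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
predict = lambda X: X @ np.array([1.5, -2.0]) + 0.3     # stand-in black-box model

x0 = np.array([0.5, 0.5])                               # instance to explain
local = x0 + 0.3 * rng.standard_normal((50, 2))         # points around x0
hull = ConvexHull(local)
V = local[hull.vertices]                                # extreme points of the polytope

# Dual dataset: convex-combination weights drawn uniformly from the unit simplex.
L = rng.dirichlet(np.ones(len(V)), size=200)            # (200, n_vertices)
X_dual = L @ V                                          # corresponding primal points
y = predict(X_dual)

surrogate = Ridge(alpha=1e-3).fit(L, y)                 # dual linear surrogate model

# Map dual coefficients back to feature importances by solving V @ beta ~= c
# (one simple matrix calculation; the paper may use a different mapping).
importance, *_ = np.linalg.lstsq(V, surrogate.coef_, rcond=None)
print("feature importances:", importance)
```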
Understanding fluid movement in multi-pored materials is vital for energy security and physiology. For instance, shale (a geological material) and bone (a biological material) exhibit multiple pore networks. Double porosity/permeability models provide a mechanics-based approach to describe hydrodynamics in such porous materials. However, current theoretical results primarily address the steady-state response, and their counterparts in the transient regime are still wanting. The primary aim of this paper is to fill this knowledge gap. We present three principal properties -- with rigorous mathematical arguments -- that the solutions under the double porosity/permeability model satisfy in the transient regime: backward-in-time uniqueness, reciprocity, and a variational principle. We employ the ``energy method'' -- by exploiting the physical total kinetic energy of the flowing fluid -- to establish the first property, and Cauchy-Riemann convolutions to prove the next two. The results reported in this paper -- which qualitatively describe the dynamics of fluid flow in double-pored media -- have (a) theoretical significance, (b) practical applications, and (c) considerable pedagogical value. In particular, these results will benefit practitioners and computational scientists in checking the accuracy of numerical simulators. The backward-in-time uniqueness lays a firm theoretical foundation for pursuing inverse problems in which one reconstructs the prescribed initial conditions from data about the solution available at a later instant.
It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs, which are typically caused by the large number of stochastic basis functions required. It becomes crucial to properly choose effective basis functions, such that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of our proposed adaptive ANOVA stochastic Galerkin method for both diffusion and Helmholtz problems.
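Concretely, the ANOVA decomposition underlying the adaptive procedure expands a function of the random inputs $y=(y_1,\dots,y_N)$ as
\[
u(y) \;=\; u_0 \;+\; \sum_{i=1}^{N} u_i(y_i) \;+\; \sum_{1\le i<j\le N} u_{ij}(y_i,y_j) \;+\; \cdots,
\]
where each component function depends only on its active variables and can therefore be expanded in a gPC basis involving only those variables, which keeps the number of stochastic basis functions small. An adaptive ANOVA procedure retains only the components whose contribution (e.g., measured by variance) exceeds a tolerance; the specific adaptivity criterion used in this work is given in the paper and is not reproduced here.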
Most existing neural network-based approaches for solving stochastic optimal control problems via the associated backward dynamic programming principle rely on the ability to simulate the underlying state variables. However, in some problems this simulation is infeasible, leading to a discretization of the state variable space and the need to train one neural network for each data point. This approach becomes computationally inefficient when dealing with large state variable spaces. In this paper, we consider a class of such stochastic optimal control problems and introduce an effective solution based on multitask neural networks. To train our multitask neural network, we introduce a novel scheme that dynamically balances the learning across tasks. Through numerical experiments on real-world derivatives-pricing problems, we demonstrate that our method outperforms state-of-the-art approaches.
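The sketch below shows one common way to set up a multitask network with a shared trunk, task-specific heads, and a simple dynamic loss-balancing heuristic (weights proportional to recent per-task losses). The architecture sizes, the balancing rule, and the placeholder data are assumptions for illustration; the paper's specific balancing scheme is not reproduced.

```python
# Minimal sketch of a multitask network with dynamic loss balancing (PyTorch).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim, n_tasks, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.trunk(x)                                  # shared representation
        return torch.cat([head(h) for head in self.heads], dim=1)

n_tasks, in_dim = 4, 3
net = MultiTaskNet(in_dim, n_tasks)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
weights = torch.ones(n_tasks) / n_tasks                    # initial task weights

x = torch.randn(256, in_dim)                               # placeholder state samples
targets = torch.randn(256, n_tasks)                        # placeholder per-task targets

for step in range(100):
    opt.zero_grad()
    per_task = ((net(x) - targets) ** 2).mean(dim=0)       # one loss per task
    loss = (weights * per_task).sum()                      # weighted multitask loss
    loss.backward()
    opt.step()
    with torch.no_grad():
        # Re-balance: emphasize tasks that currently have larger losses.
        weights = per_task / per_task.sum()
```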
We introduce quaternary modified four $\mu$-circulant codes as a modification of four circulant codes. We give basic properties of quaternary modified four $\mu$-circulant Hermitian self-dual codes. We also construct quaternary modified four $\mu$-circulant Hermitian self-dual codes having large minimum weights. Two quaternary Hermitian self-dual $[56,28,16]$ codes are constructed for the first time. These codes improve the previously known lower bound on the largest minimum weight among all quaternary (linear) $[56,28]$ codes. In addition, these codes imply the existence of a quantum $[[56,0,16]]$ code.
In arXiv:2305.03945 [math.NA], a first-order optimization algorithm was introduced to solve time-implicit schemes of reaction-diffusion equations. In this work, we conduct theoretical studies on this first-order algorithm equipped with a quadratic regularization term. We provide sufficient conditions under which the proposed algorithm and its time-continuous limit converge exponentially fast to the desired time-implicit numerical solution. We show both theoretically and numerically that the convergence rate is independent of the grid size, which makes our method suitable for large-scale problems. The efficiency of our algorithm is verified via a series of numerical examples on various types of reaction-diffusion equations. The choice of optimal hyperparameters as well as comparisons with some classical root-finding algorithms are also discussed in the numerical section.
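For context, a backward Euler time-implicit step for a reaction-diffusion equation $\partial_t u = D\Delta u + R(u)$ requires solving the nonlinear system
\[
\frac{u^{n+1}-u^{n}}{\Delta t} \;=\; D\,\Delta u^{n+1} + R(u^{n+1})
\]
at every time step. The first-order algorithm studied here solves each such step by recasting it as an optimization problem augmented with a quadratic regularization term; the precise objective and regularization are given in arXiv:2305.03945 and in the present work and are not reproduced here.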
With the increasing adoption of decentralized information systems based on a variety of permissionless blockchain networks, the choice of consensus mechanism is at the core of many controversial discussions. Ethereum's recent transition from proof-of-work (PoW)- to proof-of-stake (PoS)-based consensus has further fueled the debate on which mechanism is more favorable. While the aspects of energy consumption and the degree of (de-)centralization are often emphasized in the public discourse, seminal research has also shed light on the formal security aspects of both approaches individually. However, related work has not yet comprehensively structured the knowledge about the security properties of PoW and PoS. Rather, it has focused on in-depth analyses of specific protocols or on high-level comparative reviews covering a broad range of consensus mechanisms. To fill this gap and unravel the commonalities and discrepancies between the formal security properties of PoW- and PoS-based consensus, we conduct a systematic literature review of 26 research articles. Our findings indicate that PoW-based consensus with the longest-chain rule provides the strongest formal security guarantees. Nonetheless, PoS can achieve similar guarantees when its more pronounced tradeoff between safety and liveness is addressed through hybrid approaches.
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
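For context, a representative existing bound of this type is the classical input-output mutual information bound: for a loss that is $\sigma$-subgaussian,
\[
\bigl|\,\mathbb{E}\!\left[\operatorname{gen}(W,S)\right]\bigr| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W;S)},
\]
where $W$ is the output of the training algorithm, $S$ is the training set of $n$ samples, and $I(W;S)$ is their mutual information. This quantity can be infinite for deterministic algorithms and is hard to estimate, which motivates replacing the information in $W$ with the information contained in the predictions; the exact form of the prediction-based bounds is given in the paper.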