
When analysing Quantum Key Distribution (QKD) protocols, several metrics can be determined, one of the most important being the Secret Key Rate: the number of bits per transmission that end up as part of a secret key shared between two parties. Equations for the Secret Key Rate exist; for example, equation 52 from [1, p.1032] gives the Secret Key Rate of the BB84 protocol for a given Quantum Bit Error Rate (QBER). However, the analyses leading to such equations often rely on an asymptotic approach, which assumes that an infinite number of transmissions are sent between the two communicating parties (henceforth denoted Alice and Bob). In a practical implementation this is obviously impossible. Moreover, some QKD protocols belong to a category called asymmetric protocols, for which such an analysis is significantly more difficult. Consequently, a different approach, the finite-key regime, is currently under intensive investigation. Work by Bunandar et al. [2] has produced code that uses semidefinite programming to compute lower bounds on the Secret Key Rate, even for asymmetric protocols. Our work devises a novel QKD protocol, taking inspiration from both the 3-state version of BB84 [3] and the Twin-Field protocol [4], and then uses this code to analyse the new protocol.
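As a concrete illustration of a Secret Key Rate formula, the widely quoted asymptotic rate for BB84 under one-way post-processing is $r = 1 - 2h_2(Q)$, where $h_2$ is the binary entropy and $Q$ the QBER. This is the standard textbook expression and may differ from equation 52 of [1]; the sketch below shows only this simple asymptotic case:

```python
from math import log2

def h2(q: float) -> float:
    """Binary entropy in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * log2(q) - (1 - q) * log2(1 - q)

def bb84_key_rate(qber: float) -> float:
    """Asymptotic BB84 secret key rate per sifted bit: r = 1 - 2*h2(Q)."""
    return 1.0 - 2.0 * h2(qber)
```

Evaluating this rate shows it dropping to zero near the well-known QBER threshold of roughly 11%, which is one reason finite-key corrections matter in practice.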

Related content

We initiate the study of Bayesian conversations, which model interactive communication between two strategic agents without a mediator. We compare this to communication through a mediator and investigate the settings in which mediation can expand the range of implementable outcomes. In the first part of the paper, we ask whether the distributions of posterior beliefs that can be induced by a mediator protocol can also be induced by an (unmediated) Bayesian conversation. We show this is not possible -- mediator protocols can "correlate" the posteriors in a way that unmediated conversations cannot. Additionally, we provide characterizations of which distributions over posteriors are achievable via mediator protocols and Bayesian conversations. In the second part of the paper, we delve deeper into the eventual outcome of two-player games after interactive communication. We focus on games where only one agent has a non-trivial action and examine the performance of communication protocols that are individually rational (IR) for both parties. We consider different levels of IR, including ex-ante, interim, and ex-post, and we impose different restrictions on how Alice and Bob can deviate from the protocol: the players are either committed or non-committed. Our key findings reveal that, in the cases of ex-ante and interim IR, the expected utilities achievable through a mediator are equivalent to those achievable through unmediated Bayesian conversations. However, in the models of ex-post IR and non-committed interim IR, we observe a separation in the achievable outcomes.
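A basic condition underlying any such analysis is Bayes plausibility: a distribution over posterior beliefs is achievable by some communication only if the posteriors average back to the prior (the martingale property of beliefs). The checker below sketches that classical necessary condition only; the paper's characterizations of mediated versus unmediated protocols go well beyond it, and the function name is an illustrative choice.

```python
def is_bayes_plausible(prior, posteriors, weights, tol=1e-9):
    """Check that the weighted mixture of posterior beliefs equals the prior.

    prior: list of state probabilities; posteriors: list of belief vectors;
    weights: probability of each posterior being realized."""
    n = len(prior)
    avg = [sum(w * p[i] for w, p in zip(weights, posteriors)) for i in range(n)]
    return all(abs(avg[i] - prior[i]) <= tol for i in range(n))
```

For example, a uniform prior over two states can be split into posteriors (0.9, 0.1) and (0.1, 0.9) with equal probability, but not into (0.9, 0.1) and (0.2, 0.8).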

Corruption is frequently observed in collected data and has been extensively studied in machine learning under different corruption models. Despite this, our understanding of how these models relate to one another remains limited, and a unified view of corruptions and their consequences for learning is still lacking. In this work, we formally analyze corruption models at the distribution level through a general, exhaustive framework based on Markov kernels. We highlight the existence of intricate joint and dependent corruptions on both labels and attributes, which are rarely touched by existing research. Further, we show how these corruptions affect standard supervised learning by analyzing the resulting changes in Bayes Risk. Our findings offer qualitative insights into the consequences of "more complex" corruptions on the learning problem, and provide a foundation for future quantitative comparisons. Applications of the framework include corruption-corrected learning, a subcase of which we study in this paper by theoretically analyzing loss correction with respect to different corruption instances.
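To make the Markov-kernel view concrete, class-conditional label noise can be written as a row-stochastic transition matrix $T$, and the classical backward loss correction premultiplies the per-class loss by $T^{-1}$ to recover an unbiased loss. The sketch below illustrates only this standard special case; the matrix values and the binary setting are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Class-conditional label noise as a Markov kernel (row-stochastic matrix):
# T[i, j] = P(observed label j | clean label i)
T = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def corrupt(labels: np.ndarray) -> np.ndarray:
    """Apply the kernel T to a vector of clean labels in {0, 1}."""
    return np.array([rng.choice(2, p=T[y]) for y in labels])

def backward_corrected_loss(loss_vector: np.ndarray) -> np.ndarray:
    """Backward correction: premultiply the per-class loss by T^{-1},
    making the corrected loss unbiased under the corrupted distribution."""
    return np.linalg.inv(T) @ loss_vector
```

The same kernel formalism extends to attribute corruption and to joint label-attribute corruption, which is where the paper's framework departs from this simple case.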

Transcription is a complex phenomenon that permits the conversion of genetic information into phenotype by means of an enzyme called RNA polymerase, which erratically moves along and scans the DNA template. We perform Bayesian inference over a paradigmatic mechanistic model of non-equilibrium statistical physics, i.e., the asymmetric exclusion process in the hydrodynamic limit, assuming a Gaussian process prior for the polymerase progression rate as a latent variable. Our framework allows us to infer the speed of polymerases during transcription given their spatial distribution, whilst avoiding the explicit inversion of the system's dynamics. The results, which show processing rates varying strongly with genomic position and only a minor role for traffic-like congestion, may have strong implications for the understanding of gene expression.
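The asymmetric exclusion process itself is straightforward to simulate. The sketch below is a toy open-boundary TASEP with site-dependent hopping rates (the analogue of a position-dependent polymerase speed); the entry/exit rates `alpha` and `beta` are illustrative, and none of the paper's Bayesian or hydrodynamic machinery is reproduced here.

```python
import random

random.seed(1)

def tasep_step(lattice, rates, alpha=0.3, beta=0.5):
    """One random-sequential update of an open-boundary TASEP.

    lattice: list of 0/1 occupancies; rates[i]: hop rate out of site i;
    alpha/beta: particle entry and exit rates at the boundaries."""
    L = len(lattice)
    i = random.randrange(-1, L)  # -1 encodes the entry move
    if i == -1:
        if lattice[0] == 0 and random.random() < alpha:
            lattice[0] = 1          # particle enters at the left boundary
    elif i == L - 1:
        if lattice[i] == 1 and random.random() < beta:
            lattice[i] = 0          # particle exits at the right boundary
    else:
        if lattice[i] == 1 and lattice[i + 1] == 0 and random.random() < rates[i]:
            lattice[i], lattice[i + 1] = 0, 1   # exclusion hop to the right
    return lattice
```

Running many such steps and recording the occupancy profile gives the kind of spatial particle distribution that the paper inverts to infer the underlying rates.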

The classical way of extending an $[n, k, d]$ linear code $\mathcal{C}$ is to add an overall parity-check coordinate to each codeword of the linear code $\mathcal{C}$. The extended code, denoted by $\overline{\mathcal{C}}$ and called the standardly extended code of $\mathcal{C}$, is a linear code with parameters $[n+1, k, \bar{d}]$, where $\bar{d}=d$ or $\bar{d}=d+1$. This extending technique is one of the classical ways to construct a new linear code from a known linear code, and a way to study the original code $\mathcal{C}$ via its extended code $\overline{\mathcal{C}}$. The standardly extended codes of some families of binary linear codes have been studied to some extent. However, not much is known about the standardly extended codes of nonbinary codes. For example, the minimum distances of the standardly extended codes of the nonbinary Hamming codes have remained open for over 70 years. The first objective of this paper is to introduce the nonstandardly extended codes of a linear code and develop some general theory for extended linear codes. The second objective is to study the extended codes of several families of linear codes, including cyclic codes, projective two-weight codes and nonbinary Hamming codes. Many families of distance-optimal linear codes are obtained with the extending technique.
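The standard extension is easy to demonstrate in the binary case: appending an overall parity-check bit to the binary $[7,4,3]$ Hamming code yields the $[8,4,4]$ extended Hamming code, so here $\bar{d} = d + 1$. A minimal sketch, using one common choice of generator matrix:

```python
def extend(codeword):
    """Append an overall parity bit so the extended word has even weight."""
    return codeword + [sum(codeword) % 2]

# Generator matrix of the binary [7,4,3] Hamming code (one common choice).
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(msg):
    """Encode a 4-bit message as msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

# All 16 codewords of the extended [8,4,4] code.
codewords = [extend(encode([(m >> i) & 1 for i in range(4)])) for m in range(16)]
```

Enumerating the nonzero extended codewords confirms that every weight is even and the minimum weight is 4, which is exactly the $\bar{d} = d + 1$ case; for nonbinary codes the analogous question is, as the abstract notes, much harder.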

In many stochastic service systems, decision-makers find themselves making a sequence of decisions, with the number of decisions being unpredictable. To enhance these decisions, it is crucial to uncover the causal impact these decisions have through careful analysis of observational data from the system. However, these decisions are not made independently, as they are shaped by previous decisions and outcomes. This phenomenon is called sequential bias and violates a key assumption in causal inference that one person's decision does not interfere with the potential outcomes of another. To address this issue, we establish a connection between sequential bias and the subfield of causal inference known as dynamic treatment regimes. We expand these frameworks to account for the random number of decisions by modeling the decision-making process as a marked point process. Consequently, we can define and identify causal effects to quantify sequential bias. Moreover, we propose estimators and explore their properties, including double robustness and semiparametric efficiency. In a case study of 27,831 encounters with a large academic emergency department, we use our approach to demonstrate that the decision to route a patient to an area for low acuity patients has a significant impact on the care of future patients.
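The double robustness mentioned above can be illustrated in the simplest static, single-decision setting with the classical augmented inverse-probability-weighted (AIPW) estimator; the sketch below is that textbook special case, not the paper's marked-point-process extension, and the argument names are illustrative.

```python
import numpy as np

def aipw_mean(y, a, propensity, mu1):
    """AIPW estimate of E[Y(1)]: the outcome-model prediction mu1 plus an
    inverse-propensity-weighted residual correction for treated units.
    Consistent if EITHER the propensity model OR mu1 is correct."""
    y, a, propensity, mu1 = map(np.asarray, (y, a, propensity, mu1))
    return np.mean(mu1 + a * (y - mu1) / propensity)
```

When everyone is treated with propensity 1 the estimator reduces to the sample mean of the outcomes, regardless of how wrong the outcome model is, which is the double-robustness property in its most degenerate form.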

Evidence suggests that Free/Libre Open Source Software (FLOSS) environments provide unlimited learning opportunities. Community members engage in a number of activities, both during their interaction with their peers and while making use of the tools available in these environments. A number of studies document the existence of learning processes in FLOSS through the analysis of surveys and questionnaires filled in by FLOSS project participants. At the same time, the interest in understanding the dynamics of the FLOSS phenomenon, its popularity and success, has resulted in the development of tools and techniques for extracting and analyzing data from different FLOSS data sources. This new field is called Mining Software Repositories (MSR). In spite of these efforts, there is limited work aiming to provide empirical evidence of learning processes directly from FLOSS repositories. In this paper, we seek to trigger such an initiative by proposing an approach based on Process Mining to trace learning behaviors from FLOSS participants' trails of activities, as recorded in FLOSS repositories, and visualize them as process maps. Process maps provide a pictorial representation of real behavior as it is recorded in FLOSS data. Our aim is to provide critical evidence that boosts the understanding of learning behavior in FLOSS communities by analyzing the relevant repositories. In order to accomplish this, we propose an effective approach that comprises, first, the mining of FLOSS repositories to generate event logs, and then the generation of process maps, equipped with relevant statistical data, interpreting and indicating the value of process discovery from these repositories.
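The first step of such process discovery, turning an event log into a directly-follows relation from which a process map is drawn, can be sketched as follows; the log format and activity names below are hypothetical, not taken from any specific FLOSS repository.

```python
from collections import Counter

def directly_follows(event_log):
    """Count directly-follows pairs per case from an event log given as
    {case_id: [activity, activity, ...]} with activities in timestamp order."""
    dfg = Counter()
    for trace in event_log.values():
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Hypothetical traces of two contributors' activities.
log = {
    "dev1": ["read_docs", "post_question", "commit", "review"],
    "dev2": ["read_docs", "commit", "review", "commit"],
}
```

The resulting edge counts (e.g. how often `commit` is directly followed by `review`) are exactly the arc frequencies annotated on a process map.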

We propose an improved convergence analysis technique that characterizes the distributed learning paradigm of federated learning (FL) with imperfect/noisy uplink and downlink communications. Such imperfect communication scenarios arise in the practical deployment of FL in emerging communication systems and protocols. The analysis developed in this paper demonstrates, for the first time, that there is an asymmetry in the detrimental effects of uplink and downlink communications in FL. In particular, the adverse effect of the downlink noise is more severe on the convergence of FL algorithms. Using this insight, we propose improved Signal-to-Noise Ratio (SNR) control strategies that, discarding the negligible higher-order terms, lead to a similar convergence rate for FL as in the case of a perfect, noise-free communication channel, while incurring significantly fewer power resources compared to existing solutions. In particular, we establish that to maintain the $O(\frac{1}{\sqrt{K}})$ rate of convergence as in the case of noise-free FL, we need to scale down the uplink and downlink noise by $\Omega({\sqrt{k}})$ and $\Omega({k})$ respectively, where $k$ denotes the communication round, $k=1,\dots, K$. Our theoretical result is further characterized by two major benefits: firstly, it does not make the somewhat unrealistic assumption of bounded client dissimilarity, and secondly, it only requires smooth non-convex loss functions, a function class better suited for modern machine learning and deep learning models. We also perform extensive empirical analysis to verify the validity of our theoretical findings.
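The proposed schedule can be mimicked in simulation by shrinking the injected noise over rounds. In the sketch below the uplink and downlink noise variances are scaled down by $\sqrt{k}$ and $k$ respectively; whether the scaling applies to noise amplitude or noise power is an assumption made here for illustration, and the function and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_round(global_model, client_updates, k, sigma_up=1.0, sigma_down=1.0):
    """One FL aggregation round with round-dependent channel noise:
    uplink noise variance shrunk by sqrt(k), downlink noise variance by k."""
    # Uplink: clients' updates are aggregated through a noisy channel.
    up_noise = rng.normal(0.0, np.sqrt(sigma_up**2 / np.sqrt(k)),
                          size=global_model.shape)
    aggregated = np.mean(client_updates, axis=0) + up_noise
    # Downlink: the new global model is broadcast through a noisy channel.
    down_noise = rng.normal(0.0, np.sqrt(sigma_down**2 / k),
                            size=global_model.shape)
    return aggregated + down_noise
```

Because the downlink variance shrinks faster ($1/k$ versus $1/\sqrt{k}$), the schedule reflects the paper's finding that downlink noise is the more damaging of the two.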

We initiate the study of repeated game dynamics in the population model, in which we are given a population of $n$ nodes, each with its local strategy, which interact uniformly at random by playing multi-round, two-player games. After each game, the two participants receive rewards according to a given payoff matrix, and may update their local strategies depending on this outcome. In this setting, we ask how the distribution of player strategies evolves with respect to the number of node interactions (time complexity), as well as the number of possible player states (space complexity), determining the stationary properties of such game dynamics. Our main technical results analyze the behavior of a family of Repeated Prisoner's Dilemma dynamics in this model, for which we provide an exact characterization of the stationary distribution, and give bounds on convergence time and on the optimality gap of its expected rewards. Our results follow from a new connection between Repeated Prisoner's Dilemma dynamics in a population, and a class of high-dimensional, weighted Ehrenfest random walks, which we analyze for the first time. The results highlight non-trivial trade-offs between the state complexity of each node's strategy, the convergence of the process, and the expected average reward of nodes in the population. Our approach opens the door towards the characterization of other natural evolutionary game dynamics in the population model.
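The classical one-dimensional, unweighted Ehrenfest urn that underlies this connection is easy to simulate, and its stationary distribution is Binomial$(n, 1/2)$. The sketch below shows only this textbook base case, not the weighted high-dimensional walks analyzed in the paper.

```python
import random
from math import comb

random.seed(0)

def ehrenfest_walk(n, steps, start=0):
    """Standard Ehrenfest urn with n balls: at each step a uniformly random
    ball switches urns; returns visit counts of urn-A occupancy levels."""
    counts = [0] * (n + 1)
    x = start                      # number of balls currently in urn A
    for _ in range(steps):
        if random.random() < x / n:
            x -= 1                 # a ball from urn A moves to urn B
        else:
            x += 1                 # a ball from urn B moves to urn A
        counts[x] += 1
    return counts

# Exact stationary distribution for n = 10: Binomial(10, 1/2).
stationary = [comb(10, i) / 2**10 for i in range(11)]
```

A long run concentrates the empirical occupancy around $n/2$, matching the binomial stationary law; the paper's contribution is the analysis of weighted, high-dimensional analogues of this walk.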

Density-functional theory (DFT) has revolutionized computer simulations in chemistry and materials science. A faithful implementation of the theory requires self-consistent calculations. However, this effort involves repeatedly diagonalizing the Hamiltonian, for which a classical algorithm typically requires a computational complexity that scales cubically with respect to the number of electrons. This limits DFT's applicability to large-scale problems with complex chemical environments and microstructures. This article presents a quantum algorithm that has a linear scaling with respect to the number of atoms, which is much smaller than the number of electrons. Our algorithm leverages the quantum singular value transformation (QSVT) to generate a quantum circuit to encode the density matrix, and an estimation method for computing the output electron density. In addition, we present a randomized block coordinate fixed-point method to accelerate the self-consistent field calculations by reducing the number of components of the electron density that need to be estimated. The proposed framework is accompanied by a rigorous error analysis that quantifies the function approximation error, the statistical fluctuation, and the iteration complexity. In particular, the analysis of our self-consistent iterations takes into account the measurement noise from the quantum circuit. These advancements offer a promising avenue for tackling large-scale DFT problems, enabling simulations of complex systems that were previously computationally infeasible.
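The randomized block-coordinate fixed-point idea can be illustrated on a toy classical contraction: at each step only one randomly chosen block of the iterate is refreshed from the map. The map, block sizes, and names below are illustrative and have nothing to do with the paper's actual SCF density update.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_fixed_point(F, x0, n_blocks=4, iters=2000):
    """Randomized block-coordinate fixed-point iteration for x = F(x):
    at each step only one randomly chosen block of x is refreshed."""
    x = x0.copy()
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for _ in range(iters):
        b = blocks[rng.integers(n_blocks)]
        x[b] = F(x)[b]          # update only the chosen block
    return x

# Toy contraction: F(x) = 0.5 * x + c has the unique fixed point 2c.
c = np.array([1.0, -2.0, 0.5, 3.0])
F = lambda x: 0.5 * x + c
```

The appeal in the quantum setting is that refreshing one block per iteration means estimating fewer components of the electron density per round, at the price of more (cheaper) iterations.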

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be severe. As such, we propose another approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the performance in estimating the causal quantities between the classical estimators and the proposed estimators. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial if the causal effects are accounted for correctly.
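The confounding effect described above is easy to reproduce on synthetic data: a naive difference in means absorbs the confounder's effect, while stratifying on the confounder (a simple backdoor adjustment) recovers the true effect. All variable names and parameter values below are illustrative, not taken from the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
x = rng.integers(0, 2, n)                  # confounder (e.g., credit quality)
p_treat = np.where(x == 1, 0.8, 0.2)       # lenders favor good-quality borrowers
t = rng.random(n) < p_treat                # credit decision
y = 1.0 * t + 2.0 * x + rng.normal(0, 1, n)   # true treatment effect = 1.0

# Naive difference in means: biased upward by the confounder.
naive = y[t].mean() - y[~t].mean()

# Stratify on the confounder, then average strata effects (backdoor
# adjustment); uniform strata weights since P(x=0) = P(x=1) = 1/2 here.
adjusted = np.mean([y[t & (x == s)].mean() - y[~t & (x == s)].mean()
                    for s in (0, 1)])
```

In this setup the naive estimate lands near 2.2 while the adjusted one lands near the true effect of 1.0, which is the qualitative gap the paper quantifies on real lending data.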
