
The single-letter characterization of the entanglement-assisted capacity of a quantum channel is one of the seminal results of quantum information theory. In this paper, we consider a modified communication scenario in which the receiver is allowed an additional, `inconclusive' measurement outcome, and we employ an error metric given by the error probability in decoding the transmitted message conditioned on a conclusive measurement result. We call this setting postselected communication and the ensuing highest achievable rates the postselected capacities. Here, we provide a precise single-letter characterization of postselected capacities in the setting of entanglement assistance as well as the more general non-signalling assistance, establishing that they are both equal to the channel's projective mutual information -- a variant of mutual information based on the Hilbert projective metric. We do so by establishing bounds on the one-shot postselected capacities, with a lower bound that makes use of a postselected teleportation protocol and an upper bound in terms of the postselected hypothesis testing relative entropy. As such, we obtain fundamental limits on a channel's ability to communicate even when the strong resource of postselection is allowed, implying limitations on communication even when the receiver has access to postselected closed timelike curves.
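
For orientation, here is a hedged sketch of the standard quantities behind the projective mutual information: the max-relative entropy and the Hilbert projective form below are textbook definitions, while the exact channel optimization used in the paper may differ in detail.

```latex
% Standard definitions (sketch); the paper's precise channel quantity may
% involve a different optimization over input states.
\[
  D_{\max}(\rho \,\|\, \sigma) \;=\; \log \min \{\lambda \ge 0 : \rho \le \lambda \sigma\},
  \qquad
  D_{\Omega}(\rho \,\|\, \sigma) \;=\; D_{\max}(\rho \,\|\, \sigma) + D_{\max}(\sigma \,\|\, \rho).
\]
```

Schematically, the projective mutual information arises by substituting the Hilbert projective form D_Omega for the Umegaki relative entropy in the usual definition of a channel's mutual information.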

Related content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing, seeking original research papers on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
September 26, 2023

The single-particle cryo-EM field faces the persistent challenge of preferred orientation, for which no general computational solution exists. We introduce cryoPROS, an AI-based approach designed to address this issue. By generating auxiliary particles with a conditional deep generative model, cryoPROS addresses the intrinsic bias in orientation estimation for the observed particles. We employed cryoPROS in the cryo-EM single-particle analysis of the hemagglutinin trimer, demonstrating its ability to restore a near-atomic-resolution structure from non-tilted data. Moreover, an enhanced version, cryoPROS-MP, significantly improves the resolution of the membrane protein NaX using non-tilted data that contains the effects of micelles. Compared to classical approaches, cryoPROS requires no special experimental or image-acquisition techniques, providing a purely computational yet effective solution to the preferred-orientation problem. Finally, we conduct extensive experiments that establish the low risk of model bias and the high robustness of cryoPROS.

This note presents a refined local approximation for the logarithm of the ratio between the negative multinomial probability mass function and a multivariate normal density, both having the same mean-covariance structure. This approximation, which is derived using Stirling's formula and a meticulous treatment of Taylor expansions, yields an upper bound on the Hellinger distance between the jittered negative multinomial distribution and the corresponding multivariate normal distribution. Upper bounds on the Le Cam distance between negative multinomial and multivariate normal experiments ensue.
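
As a quick sanity check of the regime the note studies, the toy script below (our own illustration, not the paper's refined expansion) evaluates the negative multinomial log-pmf against the matching multivariate normal log-density at a lattice point near the mean; the log-ratio shrinks as the size parameter r grows.

```python
# Numerical illustration: negative multinomial vs. matching normal density.
import numpy as np
from scipy.special import gammaln
from scipy.stats import multivariate_normal

def negmultinom_logpmf(k, r, p):
    # p = (p_1, ..., p_d), stopping probability p0 = 1 - sum(p); standard pmf.
    k = np.asarray(k, dtype=float)
    p0 = 1.0 - p.sum()
    return (gammaln(r + k.sum()) - gammaln(r) - gammaln(k + 1).sum()
            + r * np.log(p0) + (k * np.log(p)).sum())

p = np.array([0.2, 0.3])                    # so p0 = 0.5
for r in [10, 100, 1000]:
    p0 = 1.0 - p.sum()
    mean = r * p / p0                       # mean of the negative multinomial
    cov = r * (np.outer(p, p) / p0**2 + np.diag(p) / p0)  # its covariance
    k = np.round(mean)                      # lattice point near the mean
    ratio = negmultinom_logpmf(k, r, p) - multivariate_normal(mean, cov).logpdf(k)
    print(f"r={r:5d}  log-ratio near the mean: {ratio:+.4f}")
```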

The main topic of this paper is algorithms for computing Nash equilibria. We cast our particular methods as instances of a general algorithmic abstraction, namely a method we call {\em algorithmic boosting}, which is also relevant to other fixed-point computation problems. Algorithmic boosting is the principle of computing fixed points by taking (long-run) averages of iterated maps, and it is a generalization of exponentiation. We first define our method in the setting of nonlinear maps. Second, we restrict attention to convergent linear maps (for computing dominant eigenvectors, for example, in the PageRank algorithm) and show that our algorithmic boosting method can yield {\em exponential speedups in the convergence rate}. Third, we show that algorithmic boosting can convert a (weak) non-convergent iterator into a (strong) convergent one. We also consider a {\em variational approach} to algorithmic boosting, providing tools to convert a non-convergent continuous flow into a convergent one. Then, by embedding the construction of averages in the design of the iterated map, we constructively prove the existence of Nash equilibria (and, therefore, of Brouwer fixed points). We then discuss implementations of averaging and exponentiation, an important matter even in the scalar case. We finally discuss a relationship between dominant (PageRank) eigenvectors and Nash equilibria.
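
A toy numerical sketch of the two ingredients just described, using our own examples rather than the paper's constructions: long-run averaging turns a non-convergent rotation into a convergent fixed-point iteration, and squaring a linear map k times compresses 2^k power-iteration steps into k matrix products.

```python
import numpy as np

# (i) Averaging a non-convergent iterator: the map rotates points around the
# fixed point c and never converges, but its running (Cesaro) average does.
theta = 1.0                                   # irrational fraction of 2*pi
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
c = np.array([0.5, 0.5])                      # the unique fixed point
x, avg = np.array([1.0, 0.0]), np.zeros(2)
for t in range(1, 20001):
    x = c + R @ (x - c)                       # iterate keeps circling c
    avg += (x - avg) / t                      # running average converges to c
print("iterate:", x, " average:", avg)

# (ii) Exponentiation of a convergent linear map (PageRank-like): 6 squarings
# have the effect of 2**6 = 64 power-iteration steps.
rng = np.random.default_rng(0)
A = rng.random((50, 50)); A /= A.sum(axis=0)  # column-stochastic matrix
B = A.copy()
for _ in range(6):
    B = B @ B                                 # B = A**(2**6)
v = B @ (np.ones(50) / 50)                    # ~ dominant eigenvector
print("residual:", np.linalg.norm(A @ v - v))
```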

A linear-time algorithm for generating auxiliary subgraphs for the 3-edge-connected components of a connected multigraph is presented. The algorithm uses an innovative graph contraction operation and makes only one pass over the graph. By contrast, the previously best-known algorithms make multiple passes over the graph to decompose it into its 2-edge-connected components or 2-vertex-connected components, then its 3-edge-connected components or 3-vertex-connected components, and then construct a cactus representation for the 2-cuts to generate the auxiliary subgraphs for the 3-edge-connected components.
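For a runnable point of reference (using networkx's generic, not linear-time, routines): k_edge_components recovers the 3-edge-connected components of a toy graph, and contracted_nodes illustrates a generic contraction primitive akin to the operation the paper introduces.

```python
import networkx as nx

# Two K4 blocks joined by a single bridge; each K4 is 3-edge-connected.
G = nx.Graph()
G.add_edges_from(nx.complete_graph(4).edges())
G.add_edges_from((u + 4, v + 4) for u, v in nx.complete_graph(4).edges())
G.add_edge(3, 4)                                    # the bridge

print(sorted(map(sorted, nx.k_edge_components(G, k=3))))
# -> [[0, 1, 2, 3], [4, 5, 6, 7]]

# A generic edge-contraction step (merge endpoints 0 and 1 into one vertex).
H = nx.contracted_nodes(G, 0, 1, self_loops=False)
print(sorted(H.edges()))
```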

In this paper we revisit the classical Cauchy problem for Laplace's equation, as well as two further related problems, in the light of regularisation of this highly ill-conditioned problem by replacing integer derivatives with fractional ones. We do so in the spirit of quasi-reversibility, replacing a classically severely ill-posed PDE problem by a nearby well-posed or only mildly ill-posed one. In order to make use of the known stabilising effect of one-dimensional fractional derivatives of Abel type, we work in a particular rectangular (in higher space dimensions, cylindrical) geometry. We start with the plain Cauchy problem of reconstructing the values of a harmonic function inside this domain from its Dirichlet and Neumann traces on part of the boundary (the cylinder base) and explore three options for doing this with fractional operators. The two other related problems are the recovery of a free boundary, and then this together with the simultaneous recovery of the impedance function in the boundary condition. Our main technique here is Newton's method. The paper contains numerical reconstructions and convergence results for the devised methods.
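
For concreteness, here is the classical problem being regularised, in its standard form; the paper's fractional variants modify the derivatives appearing below.

```latex
% The classical Cauchy problem for Laplace's equation (standard formulation).
\[
  \Delta u = 0 \ \text{in } \Omega = (0,1)\times(0,L), \qquad
  u = f, \quad \partial_\nu u = g \ \text{on } \Gamma = (0,1)\times\{0\},
\]
```

with the task of recovering u (and its trace) on the inaccessible part of the boundary; arbitrarily small perturbations of the data (f, g) can produce arbitrarily large changes in u, which is the severe ill-posedness that quasi-reversibility mitigates.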

This survey explores modern approaches for computing low-rank approximations of high-dimensional matrices by means of the randomized SVD, randomized subspace iteration, and randomized block Krylov iteration. The paper compares the procedures via theoretical analyses and numerical studies to highlight how the best choice of algorithm depends on spectral properties of the matrix and the computational resources available. Despite superior performance for many problems, randomized block Krylov iteration has not been widely adopted in computational science. The paper strengthens the case for this method in three ways. First, it presents new pseudocode that can significantly reduce computational costs. Second, it provides a new analysis that yields simple, precise, and informative error bounds. Last, it showcases applications to challenging scientific problems, including principal component analysis for genetic data and spectral clustering for molecular dynamics data.
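A compact randomized SVD with q steps of subspace iteration, following the standard Halko-Martinsson-Tropp recipe; the block Krylov variant advocated in the survey instead keeps the whole enriched basis [A Om, (A A^T) A Om, ..., (A A^T)^q A Om].

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, q=2, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))  # random test matrix
    Y = A @ Omega
    for _ in range(q):                                   # subspace iteration
        Y, _ = np.linalg.qr(Y)                           # stabilize
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                               # orthonormal range basis
    B = Q.T @ A                                          # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Test matrix with a decaying spectrum, where a rank-10 truncation is accurate.
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((500, 50)))
V0, _ = np.linalg.qr(rng.standard_normal((200, 50)))
A = U0 @ np.diag(0.8 ** np.arange(50)) @ V0.T
U, s, Vt = randomized_svd(A, rank=10)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # small rel. error
```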

We propose an approach to computing inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. Our interval-based method allows for tractable yet tight approximations, and we demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
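
A deliberately simple illustration of the interval idea (our own toy, not the paper's prototype): classify sub-boxes of [0, 2] against a universally quantified constraint, keeping boxes proved feasible for the inner approximation and all non-excluded boxes for the outer one.

```python
# Classify boxes X in [0,2] against  S = { x : forall y in [0,1], x^2 + y^2 - 2 <= 0 }.
# On [0,2] x [0,1] the constraint function is monotone, so its range over a
# box is attained at the corners, giving cheap interval bounds.
import numpy as np

def classify(a, b):
    hi = b**2 + 1.0 - 2.0          # worst case over x in [a,b], y in [0,1]
    lo = a**2 + 0.0 - 2.0          # best case
    if hi <= 0: return "inner"     # every x in [a,b] satisfies the forall
    if lo > 0:  return "outside"
    return "boundary"              # undecided: kept only in the outer set

inner, outer = [], []
for a in np.arange(0, 2, 0.125):
    tag = classify(a, a + 0.125)
    if tag == "inner": inner.append((a, a + 0.125))
    if tag != "outside": outer.append((a, a + 0.125))
print(f"inner approx up to x={inner[-1][1]}, outer up to x={outer[-1][1]}")
# True set is [0, 1]: the inner approximation is contained in it, the outer contains it.
```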

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that the validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency-map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
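
The similarity kernel the theory builds on is Shepard's law, which is standard; the mapping of explanations into a similarity space is the paper's own contribution and is only paraphrased here.

```latex
% Shepard's universal law of generalization: similarity decays exponentially
% with distance in psychological similarity space.
\[
  s(x, y) \;=\; e^{-c\, d(x, y)},
\]
```

where d is a distance in the similarity space and c > 0 a sensitivity parameter; roughly, the explainee's predicted inference weights candidate explanations by their similarity s to the explanation actually shown.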

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents, but this in turn raises the resource cost of communication and synchronisation. In this paper we present four algorithms to address these problems. In combination, they enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system according to how close to optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems in which resource-usage limits constrain the agents' behaviour, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those subtasks. We also simulate real-world effects such as network instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum in the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
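
A hypothetical sketch of the stated idea (the class, names, and confidence heuristic are ours, not the paper's four algorithms): an agent allocates subtasks epsilon-greedily and widens exploration when its recent, purely local reward history is volatile.

```python
import random
from collections import defaultdict, deque

class Allocator:
    def __init__(self, peers, lr=0.1):
        self.q = defaultdict(float)          # Q[(subtask, peer)] value estimates
        self.recent = deque(maxlen=50)       # recent rewards: local knowledge only
        self.peers, self.lr = peers, lr

    def epsilon(self):
        # Explore more when recent outcomes are volatile (low confidence).
        if len(self.recent) < 2: return 1.0
        mean = sum(self.recent) / len(self.recent)
        var = sum((r - mean) ** 2 for r in self.recent) / len(self.recent)
        return min(1.0, 0.05 + var)

    def choose(self, subtask):
        if random.random() < self.epsilon():
            return random.choice(self.peers)              # explore
        return max(self.peers, key=lambda p: self.q[(subtask, p)])  # exploit

    def update(self, subtask, peer, reward):
        self.q[(subtask, peer)] += self.lr * (reward - self.q[(subtask, peer)])
        self.recent.append(reward)

# Toy environment: peer "p2" is reliably best for subtask A.
a = Allocator(peers=["p1", "p2", "p3"])
for _ in range(200):
    peer = a.choose("A")
    a.update("A", peer, 1.0 if peer == "p2" else random.random())
print(a.choose("A"), round(a.epsilon(), 3))   # converges to "p2", low epsilon
```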

While deep learning strategies achieve outstanding results in computer vision tasks, one issue remains: current strategies rely heavily on large amounts of labeled data, and in many real-world problems it is not feasible to create that much labeled training data. Researchers therefore try to incorporate unlabeled data into the training process to reach equal results with fewer labels. Due to the large amount of concurrent research, it is difficult to keep track of recent developments. In this survey we provide an overview of commonly used techniques and methods for image classification with fewer labels, comparing 21 methods. In our analysis we identify three major trends. 1. Judged by their accuracy, state-of-the-art methods are scalable to real-world applications. 2. The degree of supervision needed to achieve results comparable to using all labels is decreasing. 3. All methods share common techniques, yet only a few combine these techniques to achieve better performance. Based on these three trends we identify future research opportunities.
