
Black holes are believed to be fast scramblers of information: they rapidly destroy local correlations and spread information throughout the system. Unscrambling this information is in principle possible given perfect knowledge of the black hole's internal dynamics [arXiv:1710.03363]. This work shows that information can be efficiently decoded from an unknown black hole by observing the outgoing Hawking radiation, even without knowledge of its internal dynamics. We show that, surprisingly, black holes with unknown internal dynamics that are rapidly scrambling but not fully chaotic admit Clifford decoders: the salient properties of a scrambling unitary can be efficiently recovered even when the unitary is exponentially complex. This recovery is possible because all the redundant complexity can be described by an entropy, the stabilizer entropy. We show how, for non-chaotic black holes, the stabilizer entropy can be efficiently pumped away, just as in a refrigerator.
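For concreteness, stabilizer entropy has a directly computable form. The sketch below is a minimal numpy implementation assuming the $\alpha = 2$ stabilizer Rényi entropy, one standard formalization (the abstract does not specify which variant the decoder uses, so treat this as illustrative). It evaluates the entropy by brute-force enumeration of Pauli strings: it vanishes on stabilizer states and grows once non-Clifford "magic" is injected.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def stabilizer_renyi_2(psi):
    """alpha=2 stabilizer Renyi entropy of a pure n-qubit state |psi>:
    M_2 = -log2( (1/d) * sum_P <psi|P|psi>^4 ), one common formalization."""
    n = int(np.log2(psi.size))
    d = 2 ** n
    total = 0.0
    for labels in itertools.product("IXYZ", repeat=n):
        P = np.array([[1.0 + 0j]])
        for s in labels:
            P = np.kron(P, PAULIS[s])
        total += np.vdot(psi, P @ psi).real ** 4
    return -np.log2(total / d)

# Stabilizer state |000>: entropy is exactly 0
psi0 = np.zeros(8, dtype=complex)
psi0[0] = 1.0
print(stabilizer_renyi_2(psi0))  # ~0.0

# Product of "magic" T-states: nonzero stabilizer entropy
t = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
psi_t = np.kron(np.kron(t, t), t)
print(stabilizer_renyi_2(psi_t))  # ~1.245 > 0
```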


We show how quantum-inspired 2d tensor networks can be used to efficiently and accurately simulate the largest quantum processors from IBM, namely Eagle (127 qubits), Osprey (433 qubits) and Condor (1121 qubits). We simulate the dynamics of a complex quantum many-body system -- specifically, the kicked Ising experiment considered recently by IBM in Nature 618, p. 500-505 (2023) -- using graph-based Projected Entangled Pair States (gPEPS), which were proposed by some of us in PRB 99, 195105 (2019). Our results show that simple tensor updates already suffice to achieve unprecedented accuracy with remarkably low computational resources for this model. Apart from simulating the original experiment for 127 qubits, we also extend our results to 433 and 1121 qubits, thus setting a benchmark for the newest IBM quantum machines. We also report accurate simulations for infinitely many qubits. Our results show that gPEPS are a natural tool to efficiently simulate quantum computers with an underlying lattice-based qubit connectivity, such as all quantum processors based on superconducting qubits.
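A full gPEPS simulation is beyond a short snippet, but the dynamics being approximated is easy to state. The toy below is an exact statevector simulation of kicked Ising evolution on a small 1D chain; the kick angle, depth and geometry are illustrative assumptions, not the experiment's parameters. gPEPS approximates exactly this kind of evolution on the 127+ qubit heavy-hex lattice, where brute-force statevector simulation is impossible.

```python
import numpy as np

# Toy exact statevector simulation of kicked Ising dynamics on a small chain.
n, theta_h, steps = 10, 0.6, 5

X = np.array([[0, 1], [1, 0]], dtype=complex)
rx = np.cos(theta_h / 2) * np.eye(2) - 1j * np.sin(theta_h / 2) * X  # RX(theta_h)

def apply_1q(psi, gate, q):
    psi = psi.reshape(2 ** q, 2, -1)
    return np.einsum("ab,ibj->iaj", gate, psi).reshape(-1)

def apply_zz(psi, i, j, angle):
    # diagonal gate exp(-i * angle * Z_i Z_j); angle = -pi/4 gives RZZ(-pi/2)
    idx = np.arange(psi.size)
    s_i = 1 - 2 * ((idx >> (n - 1 - i)) & 1)
    s_j = 1 - 2 * ((idx >> (n - 1 - j)) & 1)
    return psi * np.exp(-1j * angle * s_i * s_j)

psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0
for _ in range(steps):
    for q in range(n):                    # transverse kick on every qubit
        psi = apply_1q(psi, rx, q)
    for i in range(n - 1):                # Ising coupling layer on chain edges
        psi = apply_zz(psi, i, i + 1, -np.pi / 4)

idx = np.arange(2 ** n)
z0 = 1 - 2 * ((idx >> (n - 1)) & 1)       # Z on qubit 0
print(np.vdot(psi, z0 * psi).real)        # single-site magnetization
```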

We study the multivariate deconvolution problem of recovering the distribution of a signal from independent and identically distributed observations additively contaminated with random errors (noise) from a known distribution. For errors with independent coordinates having ordinary smooth densities, we derive an inversion inequality relating the $L^1$-Wasserstein distance between two distributions of the signal to the $L^1$-distance between the corresponding mixture densities of the observations. This smoothing inequality outperforms existing inversion inequalities. As an application of the inversion inequality to the Bayesian framework, we consider $1$-Wasserstein deconvolution with Laplace noise in dimension one using a Dirichlet process mixture of normal densities as a prior measure on the mixing distribution (or distribution of the signal). We construct an adaptive approximation of the sampling density by convolving the Laplace density with a well-chosen mixture of normal densities and show that the posterior measure concentrates around the sampling density at a nearly minimax rate, up to a log-factor, in the $L^1$-distance. The same posterior law is also shown to automatically adapt to the unknown Sobolev regularity of the mixing density, thus leading to a new Bayesian adaptive estimation procedure for mixing distributions with regular densities under the $L^1$-Wasserstein metric. We illustrate the utility of the inversion inequality also in a frequentist setting by showing that an appropriate isotone approximation of the classical kernel deconvolution estimator attains the minimax rate of convergence for $1$-Wasserstein deconvolution in any dimension $d\geq 1$ when only a tail condition is required on the latent mixing density, and we derive sharp lower bounds for these problems.
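To make the frequentist endpoint concrete, the sketch below implements the classical kernel deconvolution estimator in dimension one with Laplace noise, using a sinc kernel whose Fourier transform has compact support. The signal distribution, bandwidth and sample size are illustrative assumptions, and the isotone correction studied in the paper is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent signal: a two-component normal mixture; noise: Laplace(0, b).
# Both are illustrative choices to exercise the estimator.
n, b = 2000, 0.4
mix = rng.random(n) < 0.5
signal = np.where(mix, rng.normal(-1.5, 0.5, n), rng.normal(1.5, 0.5, n))
Y = signal + rng.laplace(0.0, b, n)

h = 0.3                                  # bandwidth (untuned, illustrative)
t = np.linspace(-1 / h, 1 / h, 801)      # sinc kernel: FT supported on |t| <= 1/h
dt = t[1] - t[0]
phi_hat_Y = np.exp(1j * np.outer(t, Y)).mean(axis=1)  # empirical char. function
phi_eps = 1.0 / (1.0 + (b * t) ** 2)     # Laplace(0, b) characteristic function

# f_hat(x) = (1/2pi) int_{|t| <= 1/h} e^{-itx} phi_hat_Y(t) / phi_eps(t) dt
xs = np.linspace(-4.0, 4.0, 201)
f_hat = (np.exp(-1j * np.outer(xs, t)) @ (phi_hat_Y / phi_eps)).real * dt / (2 * np.pi)

print(f_hat.sum() * (xs[1] - xs[0]))     # should be close to 1 (a density)
```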

Citation metrics are the best tools for research assessment. However, current metrics may be misleading in research systems that simultaneously pursue different goals, such as the advance of science and incremental innovations, because the corresponding publications have different citation distributions. We estimate the contribution to the progress of knowledge by studying only a limited number of the most cited papers, which are dominated by publications pursuing this progress. To field-normalize the metrics, we substitute the rank position of a country's papers in the global list of papers for the raw number of citations. Using synthetic series of lognormally distributed numbers, we develop the Rk-index, which is calculated from the global ranks of the 10 highest numbers in each series, and demonstrate its equivalence to the number of papers in top percentiles, $P_{\text{top }0.1\%}$ and $P_{\text{top }0.01\%}$. In real cases, the Rk-index is simple and easy to calculate, and it evaluates the contribution to the progress of knowledge better than less stringent metrics. Although further research is needed, rank analysis of the most cited papers is a promising approach for research evaluation. We also demonstrate that, for this purpose, domestic and collaborative papers should be studied independently.
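The following sketch reproduces the setup on synthetic lognormal series. Note that the aggregation step is a loud assumption: the abstract says only that the Rk-index is calculated from the global ranks of the 10 highest numbers in each series, so the inverse-rank sum below is a hypothetical placeholder, not the paper's formula.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic lognormal "citation counts" for three hypothetical countries;
# country B has a heavier tail (more top papers).
countries = {
    "A": rng.lognormal(mean=1.0, sigma=1.0, size=20000),
    "B": rng.lognormal(mean=1.0, sigma=1.2, size=20000),
    "C": rng.lognormal(mean=1.0, sigma=1.0, size=5000),
}

# Global ranking of all papers together (rank 1 = most cited)
all_vals = np.concatenate(list(countries.values()))
order = np.argsort(-all_vals)
global_rank = np.empty_like(order)
global_rank[order] = np.arange(1, all_vals.size + 1)

labels = np.concatenate([[name] * v.size for name, v in countries.items()])
for name in countries:
    ranks = np.sort(global_rank[labels == name])[:10]  # 10 best global ranks
    # HYPOTHETICAL aggregation: an inverse-rank sum is one plausible way to
    # turn the 10 global ranks into a single index; it is NOT the paper's formula.
    rk = np.sum(1.0 / ranks)
    print(name, ranks[:3], round(rk, 3))
```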

We consider the problem of estimating the marginal independence structure of a Bayesian network from observational data in the form of an undirected graph called the unconditional dependence graph. We show that unconditional dependence graphs of Bayesian networks correspond to the graphs having equal independence and intersection numbers. Using this observation, a Gr\"obner basis for a toric ideal associated to unconditional dependence graphs of Bayesian networks is given and then extended by additional binomial relations to connect the space of all such graphs. An MCMC method, called GrUES (Gr\"obner-based Unconditional Equivalence Search), is implemented based on the resulting moves and applied to synthetic Gaussian data. GrUES recovers the true marginal independence structure via a penalized maximum likelihood or MAP estimate at a higher rate than simple independence tests while also yielding an estimate of the posterior, for which the $20\%$ HPD credible sets include the true structure at a high rate for data-generating graphs with density at least $0.5$.
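The characterization of unconditional dependence graphs as graphs with equal independence and intersection numbers can be checked directly on small examples. Below is a brute-force sketch (exponential, so only for tiny graphs) with a hypothetical 4-cycle example; GrUES itself is not reimplemented here.

```python
from itertools import combinations

def adjacent(edges, u, v):
    return (u, v) in edges or (v, u) in edges

def independence_number(vertices, edges):
    # size of the largest pairwise non-adjacent vertex set (brute force)
    for k in range(len(vertices), 0, -1):
        for s in combinations(vertices, k):
            if not any(adjacent(edges, u, v) for u, v in combinations(s, 2)):
                return k
    return 0

def intersection_number(vertices, edges):
    # minimum number of cliques covering every edge (brute force)
    cliques = [s for k in range(2, len(vertices) + 1)
               for s in combinations(vertices, k)
               if all(adjacent(edges, u, v) for u, v in combinations(s, 2))]
    for k in range(1, len(edges) + 1):
        for cover in combinations(cliques, k):
            covered = {frozenset(e) for c in cover for e in combinations(c, 2)}
            if all(frozenset(e) in covered for e in edges):
                return k
    return 0

# 4-cycle: independence number 2, intersection number 4 -- unequal, so by the
# characterization above it is NOT the unconditional dependence graph of any
# Bayesian network. A triangle (a single clique) gives 1 and 1 -- equal.
V = [0, 1, 2, 3]
E = {(0, 1), (1, 2), (2, 3), (3, 0)}
print(independence_number(V, E), intersection_number(V, E))  # 2 4
```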

We present a method for computing nearly singular integrals that occur when single or double layer surface integrals, for harmonic potentials or Stokes flow, are evaluated at points nearby. Such values could be needed in solving an integral equation when one surface is close to another or to obtain values at grid points. We replace the singular kernel with a regularized version having a length parameter $\delta$ in order to control discretization error. Analysis near the singularity leads to an expression for the error due to regularization which has terms with unknown coefficients multiplying known quantities. By computing the integral with three choices of $\delta$ we can solve for an extrapolated value that has regularization error reduced to $O(\delta^5)$. In examples with $\delta/h$ constant and moderate resolution we observe total error about $O(h^5)$. For convergence as $h \to 0$ we can choose $\delta$ proportional to $h^q$ with $q < 1$ to ensure the discretization error is dominated by the regularization error. With $q = 4/5$ we find errors about $O(h^4)$. For harmonic potentials we extend the approach to a version with $O(\delta^7)$ regularization; it typically has smaller errors but the order of accuracy is less predictable.
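The extrapolation step can be illustrated in isolation. The sketch below assumes, as one plausible model consistent with the abstract, that the regularization error expands in odd powers, $I(\delta) = I + c_1\delta + c_2\delta^3 + O(\delta^5)$; computing the integral at three values of $\delta$ and solving the resulting 3x3 linear system then cancels the first two error terms.

```python
import numpy as np

# Extrapolation over three regularization lengths. Assumed error model
# (consistent with, but not stated in, the abstract):
#   I(delta) = I + c1*delta + c2*delta^3 + O(delta^5)
def extrapolate(I_of_delta, delta):
    ds = np.array([delta, 2 * delta, 4 * delta])
    A = np.column_stack([np.ones(3), ds, ds ** 3])  # unknowns: I, c1, c2
    vals = np.array([I_of_delta(d) for d in ds])
    return np.linalg.solve(A, vals)[0]              # extrapolated value of I

# Synthetic model with exact value 1.0 and a genuine O(delta^5) remainder
I_of_delta = lambda d: 1.0 + 2.0 * d + 0.7 * d ** 3 + 0.1 * d ** 5

for d in [0.1, 0.05, 0.025]:
    print(d, abs(extrapolate(I_of_delta, d) - 1.0))  # shrinks ~32x per halving
```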

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

We investigate the combinatorics of max-pooling layers, which are functions that downsample input arrays by taking the maximum over shifted windows of input coordinates, and which are commonly used in convolutional neural networks. We obtain results on the number of linearity regions of these functions by equivalently counting the number of vertices of certain Minkowski sums of simplices. We characterize the faces of such polytopes and obtain generating functions and closed formulas for the number of vertices and facets in a 1D max-pooling layer depending on the size of the pooling windows and stride, and for the number of vertices in a special case of 2D max-pooling.
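On a linearity region every pooling window selects a fixed input coordinate, so regions can be counted from below by sampling argmax patterns. Here is a Monte Carlo sketch with an illustrative configuration of 4 inputs, window size 3 and stride 1, where the exact count is 7 (9 argmax pairs minus 2 infeasible ones).

```python
import numpy as np

# Monte Carlo lower bound on the number of linearity regions of a 1D
# max-pooling layer: on each region every window selects a fixed coordinate,
# so the number of distinct argmax patterns over random inputs counts
# regions from below.
def count_regions(n_inputs, window, stride, samples=100000, seed=0):
    rng = np.random.default_rng(seed)
    starts = range(0, n_inputs - window + 1, stride)
    patterns = set()
    for x in rng.normal(size=(samples, n_inputs)):
        patterns.add(tuple(int(np.argmax(x[s:s + window])) + s for s in starts))
    return len(patterns)

# 4 inputs, window 3, stride 1: two overlapping windows, 3*3 = 9 argmax pairs,
# of which 2 are infeasible (the windows cannot disagree about which of the
# two shared coordinates is larger), leaving 7 linearity regions.
print(count_regions(4, 3, 1))  # 7
```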

Selfadhesivity is a property of entropic polymatroids which guarantees that the polymatroid can be glued to an identical copy of itself along arbitrary restrictions such that the two pieces are independent given the common restriction. We show that positive definite matrices satisfy this condition as well and examine consequences for Gaussian conditional independence structures. New axioms of Gaussian CI are obtained by applying selfadhesivity to the previously known axioms of structural semigraphoids and orientable gaussoids.
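The gluing construction behind selfadhesivity is easy to verify numerically. The sketch below builds a random positive definite matrix, glues it to a copy of itself along a one-variable restriction $L$ so that the two copies are conditionally independent given $L$, and checks positive definiteness; the specific matrix and choice of $L$ are illustrative.

```python
import numpy as np

# Gluing a positive definite matrix to a copy of itself along a restriction L,
# with the two copies conditionally independent given L.
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
S = A @ A.T + 3 * np.eye(3)        # a generic 3x3 positive definite matrix

L = [0]                            # glue along the restriction to variable 0
R = [1, 2]                         # the remaining variables, duplicated

S_LL_inv = np.linalg.inv(S[np.ix_(L, L)])
cross = S[np.ix_(R, L)] @ S_LL_inv @ S[np.ix_(L, R)]   # Cov(R, R') after gluing

# Glued matrix on variables (L, R, R'):
G = np.block([
    [S[np.ix_(L, L)], S[np.ix_(L, R)], S[np.ix_(L, R)]],
    [S[np.ix_(R, L)], S[np.ix_(R, R)], cross],
    [S[np.ix_(R, L)], cross,           S[np.ix_(R, R)]],
])

print(np.linalg.eigvalsh(G).min() > 0)   # glued matrix is positive definite
# conditional covariance of R and R' given L vanishes, i.e. R _||_ R' | L:
cond = G[1:3, 3:5] - G[1:3, 0:1] @ np.linalg.inv(G[0:1, 0:1]) @ G[0:1, 3:5]
print(np.allclose(cond, 0))
```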

Dye experimentation is a widely used method in experimental fluid mechanics for flow analysis or for the study of the transport of particles within a fluid. This technique is particularly useful in biomedical diagnostic applications ranging from hemodynamic analysis of cardiovascular systems to ocular circulation. However, simulating dyes governed by convection-diffusion partial differential equations (PDEs) can also be a useful post-processing analysis approach for computational fluid dynamics (CFD) applications. Such simulations can be used to identify the relative significance of different spatial subregions in particular time intervals of interest in an unsteady flow field. Additionally, dye evolution is closely related to non-discrete particle residence time (PRT) calculations that are governed by similar PDEs. This contribution introduces a pseudo-spectral method based on Fourier continuation (FC) for conducting dye simulations and non-discrete particle residence time calculations without numerical diffusion errors. Convergence and error analyses are performed with both manufactured and analytical solutions. The methodology is applied to three distinct physical/physiological cases: 1) flow over a two-dimensional (2D) cavity; 2) pulsatile flow in a simplified partially-grafted aortic dissection model; and 3) non-Newtonian blood flow in a Fontan graft. Although velocity data are provided in this work by numerical simulation, the proposed approach can also be applied to velocity data collected through experimental techniques such as particle image velocimetry.
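Fourier continuation itself handles non-periodic data, which a short snippet cannot do justice to. As a simplified periodic stand-in, the sketch below advances the dye equation $c_t + u\,c_x = \kappa\,c_{xx}$ with a pseudo-spectral integrating factor, illustrating the absence of numerical diffusion that the FC-based method retains on non-periodic domains; all parameters are illustrative.

```python
import numpy as np

# Periodic pseudo-spectral solver for the 1D convection-diffusion (dye) equation
#   c_t + u * c_x = kappa * c_xx
# Fourier differentiation is exact in space, so transport introduces no
# numerical diffusion. (Periodic toy only; FC extends this to non-periodic
# domains, and realistic space-dependent velocities would need time stepping.)
N, Lx = 256, 2 * np.pi
x = np.linspace(0, Lx, N, endpoint=False)
k = np.fft.fftfreq(N, d=Lx / N) * 2 * np.pi   # wavenumbers

u, kappa, dt, steps = 1.0, 1e-3, 1e-3, 2000
c0 = np.exp(-20 * (x - np.pi) ** 2)           # initial dye blob

c_hat = np.fft.fft(c0)
# exact integrating factor: d c_hat / dt = -(i*u*k + kappa*k^2) * c_hat
decay = np.exp(-(1j * u * k + kappa * k ** 2) * dt)
for _ in range(steps):
    c_hat *= decay
c = np.fft.ifft(c_hat).real

# total dye is exactly conserved on the periodic domain (k = 0 mode untouched)
print(np.isclose(c.sum(), c0.sum()))
```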

We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. We propose an interval-based method which allows for tractable yet tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
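A minimal version of the interval idea, on a hypothetical constraint (not one from the paper): approximate $S = \{p \in [0,1] : \forall x \in [0,1],\ p\,x(1-x) \le 0.1\}$, whose exact answer is $p \le 0.4$. Boxes whose interval bound certifies the constraint for all $x$ join the inner approximation; boxes refuted at a witness point are discarded; undecided boxes are bisected.

```python
# Interval branch-and-bound sketch for a quantified constraint:
#   S = { p in [0,1] : forall x in [0,1], p*x*(1-x) <= 0.1 }   (exact: p <= 0.4)

def imul(a, b):
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))

M = 256                                   # x-subdivision tames interval dependency
XS = [(i / M, (i + 1) / M) for i in range(M)]

def sup_f(p):
    # upper bound on sup_{x in [0,1]} p*x*(1-x) - 0.1
    return max(imul(p, imul(x, (1 - x[1], 1 - x[0])))[1] for x in XS) - 0.1

def inf_f_at_half(p):
    # lower bound on f at the witness point x = 1/2: 0.25*p - 0.1
    return 0.25 * p[0] - 0.1

inner, outer, stack = [], [], [(0.0, 1.0)]
while stack:
    p = stack.pop()
    if sup_f(p) <= 0:                     # certainly feasible for all x
        inner.append(p); outer.append(p)
    elif inf_f_at_half(p) > 0:            # certainly violated at x = 1/2
        continue
    elif p[1] - p[0] > 1e-3:              # undecided: bisect the parameter box
        m = 0.5 * (p[0] + p[1])
        stack += [(p[0], m), (m, p[1])]
    else:
        outer.append(p)                   # undecided slivers only in the outer set

print(sum(b - a for a, b in inner))       # inner measure, slightly under 0.4
print(sum(b - a for a, b in outer))       # outer measure, close to 0.4
```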
