
In this work, we consider the task of faithfully simulating a quantum measurement, acting on a joint bipartite quantum state, in a distributed manner. In the distributed setup, the constituent sub-systems of the joint quantum state are measured by two agents, Alice and Bob. A third agent, Charlie, receives the measurement outcomes sent by Alice and Bob. Charlie uses local and pairwise shared randomness to compute a bivariate function of the measurement outcomes. The objective of the three agents is to faithfully simulate the given distributed quantum measurement acting on the given quantum state while minimizing the communication and shared randomness rates. We demonstrate a new achievable information-theoretic rate region that exploits the structure of the bivariate function, using random structured POVMs based on asymptotically good algebraic codes. The algebraic structure of these codes is matched to that of the bivariate function that models the action of Charlie. The conventional approach for this class of problems has been to reconstruct the individual measurement outcomes of Alice and Bob at Charlie and then compute the bivariate function, using mutually independent approximating POVMs based on random unstructured codes. In the present approach, using algebraic structured POVMs, the computation is performed on the fly, obviating the need to reconstruct individual measurement outcomes at Charlie. We show that this achieves a strictly larger rate region. One of the challenges in analyzing these structured POVMs is that they exhibit only pairwise independence and induce only uniform single-letter distributions. To address this, we use nesting of algebraic codes and develop a covering lemma applicable to pairwise-independent POVM ensembles. Combining these techniques, we obtain a multi-party protocol for distributed faithful simulation and function computation.
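
As a point of reference (the classical analogue, not a result of this paper), Körner and Marton's scheme for computing the modulo-two sum $Z = X \oplus Y$ of correlated binary sources already exhibits the advantage described above: using the same linear code at both encoders, the rate pairs satisfying
\[
R_1 \geq H(X \oplus Y), \qquad R_2 \geq H(X \oplus Y)
\]
are achievable, and these can be strictly smaller than the Slepian-Wolf rates required to first reconstruct $(X, Y)$ at the decoder and then compute the sum. The structured POVMs above play the role of these matched linear codes in the quantum setting.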

Related Content

The Quantum Reverse Shannon Theorem has been a milestone in quantum information theory. It states that asymptotically reliable simulation of a quantum channel assisted by unlimited shared entanglement is possible if and only if the classical communication cost is greater than or equal to the channel's entanglement-assisted capacity. In this letter, we are concerned with the performance of reliable reverse Shannon simulation of quantum channels. Our main result is an in-depth characterization of the reliability function, that is, the optimal rate at which the performance of channel simulation asymptotically approaches perfection. In particular, we determine the exact formula for the reliability function when the classical communication cost is not too high -- below a critical value. In the derivation, we also obtain an achievability bound for the simulation of finitely many copies of the channel, which is of practical significance.
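
For orientation, one standard way to formalize such a reliability function (the notation here is ours, not necessarily the paper's): writing $F_n(R)$ for the best fidelity achievable by any entanglement-assisted simulation of $n$ copies of the channel with classical communication rate $R$, the reliability function is the optimal error exponent
\[
r(R) = \lim_{n \to \infty} -\frac{1}{n} \log\bigl(1 - F_n(R)\bigr),
\]
which the result above determines exactly for all rates $R$ below a critical value.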

Light propagation through diffusive media can be described by the diffusion equation in a space-time domain. Further, fluorescence can be described by a system of coupled diffusion equations. This paper analyzes time-domain measurements of the temporal point-spread function (TPSF) at the boundary of such diffusive media for a given source and detector. We focus on the temporal position of the TPSF maximum, which we refer to as the peak time. Although some unique properties of solutions of this system have been studied numerically, we give a mathematical analysis of the peak time, proving its existence and uniqueness and deriving an explicit expression for it. We clearly show the relationship between the peak time and the position of an object in the medium.
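
To make the peak time concrete, here is a minimal numerical sketch for the textbook diffusion Green's function of an infinite homogeneous medium, a much simpler setting than the coupled fluorescence system analyzed in the paper; all parameter values are illustrative, and the closed-form expression below is simply the stationary point of this particular Green's function.

```python
import numpy as np

# TPSF of an infinite homogeneous medium (diffusion approximation):
# phi(r, t) ~ (4*pi*D*c*t)^(-3/2) * exp(-r^2/(4*D*c*t) - mu_a*c*t)
mu_a, mu_s = 0.01, 1.0         # absorption / reduced scattering [1/mm]
c = 0.219                      # speed of light in the medium [mm/ps]
D = 1.0 / (3.0 * mu_s)         # diffusion coefficient [mm]
r = 30.0                       # source-detector distance [mm]

t = np.linspace(1.0, 5000.0, 200000)            # time grid [ps]
tpsf = (4*np.pi*D*c*t)**-1.5 * np.exp(-r**2/(4*D*c*t) - mu_a*c*t)

t_peak_numeric = t[np.argmax(tpsf)]             # peak time from the grid
t_peak_formula = (np.sqrt(2.25 + mu_a*r**2/D) - 1.5) / (2*mu_a*c)
print(t_peak_numeric, t_peak_formula)           # the two agree closely
```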

This work considers the problem of learning the Markov parameters of a linear system from observed data. Recent non-asymptotic system identification results have characterized the sample complexity of this problem in the single- and multi-rollout settings. In both instances, the number of samples required to obtain acceptable estimates can produce optimization problems with an intractably large number of decision variables for a second-order algorithm. We show that a randomized and distributed Newton algorithm based on Hessian sketching produces $\epsilon$-optimal solutions and converges geometrically. Moreover, the algorithm is trivially parallelizable. Our results hold for a variety of sketching matrices, and we illustrate the theory with numerical examples.
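
A minimal sketch of the Hessian-sketching idea on a generic least-squares instance (an illustration of the technique, not the authors' implementation; the Gaussian sketching matrix and all dimensions are illustrative choices):

```python
import numpy as np

def sketched_newton_ls(A, b, sketch_dim, n_iters=20, seed=0):
    """Newton iterations for 0.5*||Ax - b||^2 where the exact Hessian
    A.T @ A is replaced by the sketched Hessian (S A).T @ (S A), with S
    a fresh sketch_dim x N Gaussian sketch at every iteration."""
    rng = np.random.default_rng(seed)
    N, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                  # exact gradient
        S = rng.normal(size=(sketch_dim, N)) / np.sqrt(sketch_dim)
        SA = S @ A                                # sketched data matrix
        H_hat = SA.T @ SA + 1e-8 * np.eye(n)      # approximate Hessian
        x = x - np.linalg.solve(H_hat, grad)      # sketched Newton step
    return x

# Toy usage: recover a coefficient vector from many noisy observations.
rng = np.random.default_rng(1)
A = rng.normal(size=(5000, 50))
x_true = rng.normal(size=50)
b = A @ x_true + 0.01 * rng.normal(size=5000)
print(np.linalg.norm(sketched_newton_ls(A, b, sketch_dim=300) - x_true))
```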

A phase-type distribution is the distribution of the time until absorption in a finite-state-space, time-homogeneous Markov jump process with one absorbing state and the rest transient. These distributions are mathematically tractable and conceptually attractive for modeling physical phenomena due to their interpretation in terms of a hidden Markov structure. Three recent extensions of regular phase-type distributions give rise to models which allow for heavy tails: discrete or continuous scaling; fractional-time semi-Markov extensions; and inhomogeneous time-change of the underlying Markov process. In this paper, we present a unifying theory of heavy-tailed phase-type distributions of which all three approaches are particular cases. Our main objective is to provide useful models for heavy-tailed phase-type distributions, but any other tail behavior is also captured by our specification. We provide relevant new examples and also show how existing approaches are naturally embedded. Subsequently, two multivariate extensions are presented, inspired by the univariate construction, which can be considered a matrix version of a frailty model. We provide fully explicit EM algorithms for all models and illustrate them using synthetic and real-life data.
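
For concreteness, a regular phase-type distribution $\mathrm{PH}(\alpha, T)$ with initial distribution $\alpha$ over the transient states and sub-intensity matrix $T$ has density $f(x) = \alpha e^{Tx} t$, where $t = -T\mathbf{1}$ is the exit-rate vector. A minimal sketch (the particular $\alpha$ and $T$ are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def phase_type_pdf(x, alpha, T):
    """Density of PH(alpha, T): f(x) = alpha @ expm(T x) @ t,
    with exit-rate vector t = -T @ 1."""
    t = -T @ np.ones(T.shape[0])
    return float(alpha @ expm(T * x) @ t)

# Three transient states visited in order: a hypoexponential example
# with rates 2, 1, and 0.5, absorbed from the last state.
alpha = np.array([1.0, 0.0, 0.0])
T = np.array([[-2.0,  2.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [ 0.0,  0.0, -0.5]])
print(phase_type_pdf(1.0, alpha, T))
```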

This paper introduces the automatic minuting system proposed by the Hitachi team for the First Shared Task on Automatic Minuting (AutoMin-2021). We utilize a reference-free approach (i.e., without using training minutes) for automatic minuting (Task A), which first splits a transcript into blocks on the basis of topics and subsequently summarizes those blocks with a pre-trained BART model fine-tuned on a summarization corpus of chat dialogue. In addition, we apply argument mining to the generated minutes, reorganizing them in a well-structured and coherent way. We utilize multiple relevance scores to determine whether or not a minute is derived from the same meeting when either a transcript or another minute is given (Tasks B and C). On top of these scores, we train a conventional machine learning model to combine them and make the final decision. Consequently, our approach for Task A achieves the best adequacy score among all submissions and performance close to the best system in terms of grammatical correctness and fluency. For Tasks B and C, the proposed model successfully outperforms a majority-vote baseline.
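
A minimal sketch of the shape of the Task A pipeline (not the authors' code: the topic segmenter below is a naive fixed-size placeholder, and the public SAMSum-fine-tuned checkpoint philschmid/bart-large-cnn-samsum stands in for the authors' own fine-tuned BART model):

```python
from transformers import pipeline

# BART fine-tuned on a chat-dialogue summarization corpus (stand-in model).
summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")

def split_into_blocks(utterances, block_size=30):
    # Placeholder segmentation into fixed-size blocks; the actual system
    # splits the transcript at topic boundaries.
    for i in range(0, len(utterances), block_size):
        yield " ".join(utterances[i:i + block_size])

def draft_minutes(utterances):
    minutes = []
    for block in split_into_blocks(utterances):
        out = summarizer(block, max_length=80, min_length=10, truncation=True)
        minutes.append("- " + out[0]["summary_text"])
    return "\n".join(minutes)
```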

The Gaussian covariance graph model is a popular model for revealing underlying dependency structures among random variables. A Bayesian approach to the estimation of covariance structures uses priors that force some off-diagonal entries of the covariance matrix to zero and impose a positive-definiteness constraint on the matrix. In this paper, we consider a spike-and-slab prior on the off-diagonal entries, a mixture of a point mass and a normal distribution. The point mass naturally introduces sparsity into covariance structures, so that the resulting posterior enables covariance structure learning. Under this prior, we calculate posterior model probabilities of covariance structures using a Laplace approximation. We show that the error due to the Laplace approximation becomes asymptotically negligible at a rate depending on the posterior convergence rate of the covariance matrix under the Frobenius norm. With the approximated posterior model probabilities, we propose a new framework for estimating a covariance structure. Since the Laplace approximation is performed around the mode of the conditional posterior of the covariance matrix, which cannot be obtained in closed form, we propose a block coordinate descent algorithm to find the mode and show that the covariance matrix can be estimated with this algorithm once the structure is chosen. Through a simulation study based on five numerical models, we show that the proposed method outperforms the graphical lasso and the sample covariance matrix in terms of root mean squared error, max norm, spectral norm, specificity, and sensitivity. The advantage of the proposed method over these competitors is also demonstrated in terms of accuracy when applied to linear discriminant analysis (LDA) classification on a breast cancer diagnostic dataset.
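
To illustrate the prior itself (not the posterior computation described above), here is a minimal sketch that draws a covariance matrix with spike-and-slab off-diagonal entries, enforcing positive definiteness by rejection; all hyperparameter values are illustrative:

```python
import numpy as np

def sample_spike_slab_cov(p, slab_prob=0.2, slab_var=0.5, seed=0,
                          max_tries=10000):
    """Draw from a spike-and-slab prior on off-diagonal entries,
    truncated to the positive definite cone by rejection."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        S = np.diag(1.0 + rng.random(p))              # positive diagonal
        for i in range(p):
            for j in range(i + 1, p):
                if rng.random() < slab_prob:          # slab: normal draw
                    S[i, j] = S[j, i] = rng.normal(0.0, np.sqrt(slab_var))
                # spike: point mass keeps S[i, j] exactly zero
        if np.linalg.eigvalsh(S).min() > 0:           # PD constraint
            return S
    raise RuntimeError("rejection failed; reduce slab_prob or slab_var")

print(sample_spike_slab_cov(5))
```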

Homology features of spaces that appear in many applications, for instance 3D meshes, are among the most important topological properties of these objects. Given a non-trivial cycle in a homology class, we consider the problem of computing an optimal representative of that homology class. We study two measures of optimality, namely, the lexicographic order of cycles (the lex-optimal cycle) and the bottleneck norm (a bottleneck-optimal cycle). We give a simple algorithm for computing the lex-optimal cycle for a 1-homology class in a surface. In contrast, our main result is that, for 3-manifolds of size $n^2$ in Euclidean 3-space, the problem of finding a bottleneck-optimal cycle cannot be solved more efficiently than a system of linear equations with an $n \times n$ sparse matrix. From this reduction, we deduce several hardness results. Most notably, we show that for 3-manifolds given as a subset of 3-space (of size $n^2$), persistent homology computations are at least as hard as matrix multiplication, while ordinary homology computations can be done in $O(n^2 \log n)$ time. This is the first such distinction between these two computations. Moreover, it follows that the same disparity exists between height persistent homology computation and general sub-level set persistent homology computation for simplicial complexes in 3-space.

Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment, and we explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in more stable convergence to a desirable solution than a primal min-max GAN-like objective or an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure with more stable and monotonic improvement over time.
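
For reference, the Maximum Mean Discrepancy objective mentioned above, in its standard biased form with a Gaussian kernel (the bandwidth and the toy point clouds are illustrative):

```python
import numpy as np

def mmd2_gaussian(X, Y, bandwidth=1.0):
    """Biased estimate of squared MMD between samples X (n x d) and
    Y (m x d) under a Gaussian RBF kernel."""
    def k(A, B):
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * bandwidth**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))      # source point cloud
Y = rng.normal(0.5, 1.0, size=(500, 2))      # shifted target point cloud
print(mmd2_gaussian(X, Y))                   # positive; near zero when aligned
```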

Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling, and prediction, among other tasks. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
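
In symbols (the notation here is ours), the latent nested construction takes a completely random measure $\mu_0$ shared across groups and independent group-specific measures $\mu_\ell$, and normalises their sums,
\[
\tilde p_\ell = \frac{\mu_0 + \mu_\ell}{\mu_0(\mathbb{X}) + \mu_\ell(\mathbb{X})},
\]
so that dependence across the resulting random probability measures $\tilde p_\ell$ is driven by the shared component $\mu_0$, while the group-specific components prevent degeneracy to full exchangeability.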

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is: strongly convex and smooth, strongly convex, smooth, or simply convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improved condition numbers.
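
As a single-machine reference point, here is Nesterov's accelerated gradient method for an $L$-smooth, $\mu$-strongly convex objective (the paper applies this scheme to the dual problem under the network constraints; the quadratic instance below is illustrative):

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, n_iters=200):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly
    convex function given through its gradient oracle."""
    q = mu / L
    beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))   # constant momentum
    x = y = x0.copy()
    for _ in range(n_iters):
        x_next = y - grad(y) / L                 # gradient step
        y = x_next + beta * (x_next - x)         # momentum step
        x = x_next
    return x

# Toy strongly convex quadratic: f(x) = 0.5 x'Qx - b'x.
rng = np.random.default_rng(0)
M = rng.normal(size=(30, 30))
Q = M.T @ M + np.eye(30)
b = rng.normal(size=30)
L, mu = np.linalg.eigvalsh(Q).max(), np.linalg.eigvalsh(Q).min()
x_hat = nesterov_agd(lambda x: Q @ x - b, np.zeros(30), L, mu)
print(np.linalg.norm(x_hat - np.linalg.solve(Q, b)))  # near zero
```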
