
In this work we revisit the Boolean Hidden Matching communication problem, which was the first communication problem in the one-way model to demonstrate an exponential classical-quantum communication separation. In this problem, Alice's bits are matched into pairs according to a partition that Bob holds. These pairs are compressed using a parity function, and it is promised that the resulting bit-string is equal either to another bit-string Bob holds, or to its complement. The problem is to decide which case is the correct one. Here we generalize the Boolean Hidden Matching problem by replacing the parity function with an arbitrary Boolean function $f$. Efficient communication protocols are presented depending on the sign-degree of $f$. If its sign-degree is at most $1$, we show an efficient classical protocol. If its sign-degree is at most $2$, we show an efficient quantum protocol. We then completely characterize the classical hardness of all symmetric functions $f$ of sign-degree at least $2$, except for one family of specific cases. We also prove, via Fourier analysis, a classical lower bound for any function $f$ whose pure high degree is at least $2$. Similarly, we prove, also via Fourier analysis, a quantum lower bound for any function $f$ whose pure high degree is at least $3$. These results give a large family of new exponential classical-quantum communication separations.
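
As a concrete illustration of the generalized problem, the sketch below builds a small instance in which Alice's bits are partitioned into tuples, compressed tuple-wise by an arbitrary Boolean function $f$, and compared against Bob's string or its complement. The function `make_instance`, the tuple size, and all variable names are hypothetical and only serve to make the promise explicit; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_instance(n_groups, t, f, rng):
    """Build an f-BHM-style instance: Alice holds x, Bob holds a partition of the
    indices into arity-t tuples and a string w; the promise is that applying f
    tuple-wise to x yields either w or its complement."""
    n = n_groups * t
    x = rng.integers(0, 2, size=n)                        # Alice's input bits
    partition = rng.permutation(n).reshape(n_groups, t)   # Bob's partition into tuples
    z = np.array([f(x[idx]) for idx in partition])        # tuple-wise compression by f
    flip = int(rng.integers(0, 2))                        # which side of the promise holds
    w = z ^ flip                                          # Bob's reference string
    return x, partition, w, flip

parity = lambda bits: int(np.bitwise_xor.reduce(bits))    # f = parity recovers standard BHM

x, partition, w, flip = make_instance(n_groups=8, t=2, f=parity, rng=rng)
z = np.array([parity(x[idx]) for idx in partition])
assert np.all(z == w) or np.all(z == 1 - w)               # the promise holds by construction
print("promise case:", "complement" if flip else "equal")
```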

Related content

Sparsely-activated models have achieved great success in natural language processing through large-scale parameters and relatively low computational cost, and have gradually become a feasible technique for training and deploying extremely large models. Due to the limits imposed by communication cost, activating multiple experts is hardly affordable during training and inference. Therefore, previous work usually activates just one expert at a time to avoid additional communication cost. Such a routing mechanism limits the upper bound of model performance. In this paper, we first investigate the phenomenon that increasing the number of activated experts can boost model performance under a higher sparsity ratio. To increase the number of activated experts without increasing computational cost, we propose SAM (Switch and Mixture) routing, an efficient hierarchical routing mechanism that activates multiple experts on the same device (GPU). Our method sheds light on the training of extremely large sparse models, and experiments show that our models achieve significant performance gains with substantial efficiency improvements.
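
A minimal NumPy sketch of the two-level idea described above: each token is first switched to a single device group, and a mixture of several experts is then activated inside that group, so all additional experts are local to one device. The dimensions, gate matrices, and top-k rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_devices, experts_per_device, k_local = 16, 4, 4, 2       # hypothetical sizes
W_switch = rng.normal(size=(d_model, n_devices))                    # level 1: device gate
W_mix = rng.normal(size=(n_devices, d_model, experts_per_device))   # level 2: per-device gates

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sam_route(token):
    """Two-level (switch-then-mixture) routing for a single token."""
    device = int(np.argmax(token @ W_switch))        # switch: exactly one device
    local_logits = token @ W_mix[device]             # gate over experts on that device
    top = np.argsort(local_logits)[-k_local:]        # mixture: k experts, all local
    weights = softmax(local_logits[top])
    return device, top, weights

device, experts, weights = sam_route(rng.normal(size=d_model))
print(f"device {device}, local experts {experts.tolist()}, weights {np.round(weights, 3)}")
```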

In this work we consider a class of non-linear eigenvalue problems that admit a spectrum similar to that of a Hamiltonian matrix, in the sense that the spectrum is symmetric with respect to both the real and imaginary axes. More precisely, we present a method to iteratively approximate the eigenvalues of such non-linear eigenvalue problems closest to a given purely real or imaginary shift, while preserving the symmetries of the spectrum. To this end, the presented method exploits the equivalence between the considered non-linear eigenvalue problem and the eigenvalue problem associated with a linear but infinite-dimensional operator. To compute the eigenvalues closest to the given shift, we apply a specifically chosen shift-invert transformation to this linear operator and compute the eigenvalues of largest modulus of the new shifted and inverted operator using an (infinite) Arnoldi procedure. The advantage of the chosen shift-invert transformation is that the spectrum of the transformed operator has a "real skew-Hamiltonian"-like structure. Furthermore, it is proven that the Krylov space constructed by applying this operator satisfies an orthogonality property in terms of a specifically chosen bilinear form. By taking this property into account in the orthogonalization process, it is ensured that even in the presence of rounding errors, the obtained approximation for, e.g., a simple, purely imaginary eigenvalue is simple and purely imaginary. The presented work can thus be seen as an extension of [V. Mehrmann and D. Watkins, "Structure-Preserving Methods for Computing Eigenpairs of Large Sparse Skew-Hamiltonian/Hamiltonian Pencils", SIAM J. Sci. Comput. (22.6), 2001] to the considered class of non-linear eigenvalue problems. Although the presented method is initially defined on function spaces, it can be implemented using finite-dimensional linear algebra operations.
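
For orientation only, the snippet below runs a plain (non-structure-preserving) shift-invert Arnoldi iteration on a small finite-dimensional matrix: the operator is shifted and inverted, Ritz values of the transformed operator are computed, and the eigenvalues nearest the shift are recovered. The infinite-dimensional operator, the bilinear form, and the structure-preserving orthogonalization of the paper are not reproduced here; this is a sketch of the generic shift-invert idea under those simplifying assumptions.

```python
import numpy as np

def shift_invert_arnoldi(A, sigma, m, rng=np.random.default_rng(0)):
    """Approximate the eigenvalues of A closest to the shift sigma by running
    m Arnoldi steps with the transformed operator B = (A - sigma I)^{-1}."""
    n = A.shape[0]
    B = np.linalg.inv(A - sigma * np.eye(n))       # dense inverse; fine for a sketch
    Q = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    q = rng.normal(size=n) + 1j * rng.normal(size=n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = B @ Q[:, j]
        for i in range(j + 1):                     # modified Gram-Schmidt orthogonalization
            H[i, j] = np.vdot(Q[:, i], w)
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    theta = np.linalg.eigvals(H[:m, :m])           # Ritz values of the transformed operator
    return sigma + 1.0 / theta                     # map back to eigenvalues of A

A = np.diag(np.arange(1.0, 11.0))
print(np.sort_complex(shift_invert_arnoldi(A, sigma=3.2, m=6)))   # clusters near 3 and 4
```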

Gaussian boson sampling (GBS) is a model of photonic quantum computing that has attracted attention as a platform for building quantum devices capable of performing tasks that are out of reach for classical devices. There is therefore significant interest, from the perspective of computational complexity theory, in solidifying the mathematical foundation for the hardness of simulating these devices. We show that, under the standard Anti-Concentration and Permanent-of-Gaussians conjectures, there is no efficient classical algorithm to sample from ideal Gaussian boson sampling distributions (even approximately) unless the polynomial hierarchy collapses. The hardness proof holds in the regime where the number of modes scales quadratically with the number of photons, a setting in which hardness was widely believed to hold but which nevertheless had no definitive proof. Crucial to the proof is a new method for programming a Gaussian boson sampling device so that the output probabilities are proportional to the permanents of submatrices of an arbitrary matrix. This technique is a generalization of Scattershot BosonSampling that we call BipartiteGBS. We also make progress towards the goal of proving hardness in the regime where there are fewer than quadratically more modes than photons (i.e., the high-collision regime) by showing that the ability to approximate permanents of matrices with repeated rows/columns confers the ability to approximate permanents of matrices with no repetitions. The reduction suffices to prove that GBS is hard in the constant-collision regime.
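
Since the output probabilities in question are governed by matrix permanents, a brute-force reference implementation of the permanent helps make the central object concrete. The sketch below uses Ryser's $O(2^n n)$ inclusion-exclusion formula and is only feasible for very small matrices; it is not part of the paper's construction.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)      # sum of each row over the chosen columns
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent(A))   # 1*4 + 2*3 = 10
```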

Given a graph $G=(V,E)$ and an integer $k$, the Minimum Membership Dominating Set (MMDS) problem seeks to find a dominating set $S \subseteq V$ of $G$ such that for each $v \in V$, $|N[v] \cap S|$ is at most $k$. We investigate the parameterized complexity of the problem and obtain the following results about MMDS: (i) W[1]-hardness of the problem parameterized by the pathwidth (and thus, treewidth) of the input graph; (ii) W[1]-hardness parameterized by $k$ on split graphs; (iii) an algorithm running in time $2^{\mathcal{O}(\textbf{vc})} |V|^{\mathcal{O}(1)}$, where $\textbf{vc}$ is the size of a minimum vertex cover of the input graph; and (iv) an ETH-based lower bound showing that the algorithm in (iii) is optimal.
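
A small feasibility checker clarifies what a solution looks like: every vertex must be dominated, yet no closed neighbourhood may contain more than $k$ solution vertices. The adjacency-list encoding and the path example are hypothetical and only for illustration.

```python
def is_mm_dominating_set(adj, S, k):
    """Check that S dominates every vertex of the graph and that every closed
    neighbourhood N[v] contains at most k vertices of S."""
    S = set(S)
    for v, nbrs in adj.items():
        hits = len((set(nbrs) | {v}) & S)
        if hits == 0 or hits > k:          # not dominated, or membership bound violated
            return False
    return True

# A path on 4 vertices: 0 - 1 - 2 - 3.
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_mm_dominating_set(p4, {0, 3}, k=1))   # True: each N[v] meets S exactly once
print(is_mm_dominating_set(p4, {1, 2}, k=1))   # False: N[1] contains both 1 and 2
```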

This paper studies the $\tau$-coherence of an $(n \times p)$ observation matrix in a Gaussian framework. The $\tau$-coherence is defined as the largest magnitude, outside a diagonal band of width $\tau$, of the empirical correlation coefficients associated with our observations. Using the Chen-Stein method, we derive the limiting law of the normalized coherence and show convergence towards a Gumbel distribution. We generalize here the results of Cai and Jiang [CJ11a]. We assume that the covariance matrix of the model is banded. Moreover, we provide numerical considerations highlighting issues arising from the high-dimension hypotheses. We numerically illustrate the asymptotic behaviour of the coherence with Monte-Carlo experiments, using an HPC splitting strategy for high-dimensional correlation matrices.
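
The quantity itself is straightforward to compute for a given sample: form the empirical correlation matrix of the $n$ observations of $p$ variables and take the largest magnitude outside the band $|i-j| \le \tau$. The sketch below does exactly that on synthetic i.i.d. Gaussian data; the sizes are illustrative assumptions.

```python
import numpy as np

def tau_coherence(X, tau):
    """Largest absolute empirical correlation outside the diagonal band of width tau.
    X has shape (n, p): n observations of p variables."""
    R = np.corrcoef(X, rowvar=False)        # (p, p) empirical correlation matrix
    p = R.shape[0]
    i, j = np.indices((p, p))
    outside_band = np.abs(i - j) > tau
    return np.abs(R[outside_band]).max()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))              # i.i.d. Gaussian data, hypothetical sizes
print(tau_coherence(X, tau=3))
```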

We investigate variational principles for the approximation of quantum dynamics that apply to approximation manifolds that do not have complex-linear tangent spaces. The first one, dating back to McLachlan (1964), minimizes the residual of the time-dependent Schr\"odinger equation, while the second one, originating from the lecture notes of Kramer--Saraceno (1981), imposes the stationarity of an action functional. We characterize both principles in terms of metric and symplectic orthogonality conditions, consider their conservation properties, and derive an elementary a-posteriori error estimate. As an application, we revisit the time-dependent Hartree approximation and frozen Gaussian wave packets.
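
Schematically, with $\hbar = 1$, an approximation manifold $\mathcal{M}$, and tangent space $T_u\mathcal{M}$ at the current approximation $u$, the two principles can be stated in the standard textbook form below (this only summarizes the setting described above, assuming amsmath for the notation):

```latex
% Residual of the time-dependent Schr\"odinger equation $\mathrm{i}\dot{u} = Hu$
% at the approximation $u \in \mathcal{M}$, with tangent space $T_u\mathcal{M}$.
\[
  \text{McLachlan:}\qquad
  \dot{u} = \operatorname*{arg\,min}_{v \in T_u\mathcal{M}} \bigl\| v + \mathrm{i} H u \bigr\|
  \;\Longleftrightarrow\;
  \operatorname{Re}\langle w,\, \dot{u} + \mathrm{i} H u \rangle = 0
  \quad \forall\, w \in T_u\mathcal{M},
\]
\[
  \text{Action (Kramer--Saraceno):}\qquad
  \operatorname{Im}\langle w,\, \dot{u} + \mathrm{i} H u \rangle = 0
  \quad \forall\, w \in T_u\mathcal{M}.
\]
```

When the tangent space happens to be complex-linear, the two orthogonality conditions together reduce to the familiar Dirac--Frenkel condition $\langle w, \dot{u} + \mathrm{i} H u \rangle = 0$; the interesting case studied here is precisely when they differ.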

The (unweighted) tree edit distance problem for $n$-node trees asks to compute a measure of dissimilarity between two rooted trees with node labels. The current best algorithm, from more than a decade ago, runs in $O(n^3)$ time [Demaine, Mozes, Rossman, and Weimann, ICALP 2007]. The same paper also showed that $O(n^3)$ is the best possible running time for any algorithm using the so-called decomposition strategy, which underlies almost all the known algorithms for this problem. These algorithms also work for the weighted tree edit distance problem, which cannot be solved in truly sub-cubic time under the APSP conjecture [Bringmann, Gawrychowski, Mozes, and Weimann, SODA 2018]. In this paper, we break the cubic barrier by showing an $O(n^{2.9546})$-time algorithm for the unweighted tree edit distance problem. We consider an equivalent maximization problem and use a dynamic programming scheme involving matrices with many special properties. By using a decomposition scheme as well as several combinatorial techniques, we reduce tree edit distance to the max-plus product of bounded-difference matrices, which can be computed in truly sub-cubic time [Bringmann, Grandoni, Saha, and Vassilevska Williams, FOCS 2016].
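
The reduction target is the max-plus (tropical) matrix product; a direct cubic-time reference implementation is shown below so the object is concrete. The paper's point is that for bounded-difference matrices this product admits a truly sub-cubic algorithm, which is not reproduced here.

```python
import numpy as np

def max_plus_product(A, B):
    """(max, +) product: C[i, j] = max_k (A[i, k] + B[k, j]).  O(n^3) reference version."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), -np.inf)
    for k in range(A.shape[1]):
        C = np.maximum(C, A[:, [k]] + B[[k], :])   # rank-one (max, +) update for index k
    return C

A = np.array([[0.0, 1.0], [2.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
print(max_plus_product(A, B))   # [[1., 2.], [3., 4.]]
```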

This work explores the development and analysis of an efficient reduced order model for the study of a bifurcating phenomenon, known as the Coandă effect, in a multi-physics setting involving fluid and solid media. Considering a Fluid-Structure Interaction problem, we aim at generalizing previous works towards a more reliable description of the physics involved. In particular, we provide several insights on how the introduction of an elastic structure influences the bifurcating behaviour. We address the computational burden by developing a reduced order branch-wise algorithm based on a monolithic Proper Orthogonal Decomposition. We compare different constitutive relations for the solid, and we observe that a nonlinear hyper-elastic law delays the bifurcation w.r.t. the standard model, while the same effect is even magnified when considering a linear elastic solid.
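
As a minimal illustration of the Proper Orthogonal Decomposition ingredient (not of the full monolithic, branch-wise FSI algorithm), the sketch below extracts a reduced basis from a snapshot matrix via a truncated SVD. The snapshot data are synthetic and the sizes are illustrative assumptions.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper Orthogonal Decomposition: return the r left singular vectors that best
    span the columns of the snapshot matrix, together with the retained energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = (s[:r] ** 2).sum() / (s ** 2).sum()
    return U[:, :r], energy

# Synthetic snapshots: solution fields of size 500 sampled at 40 parameter values.
rng = np.random.default_rng(0)
modes = rng.normal(size=(500, 3))
coeffs = rng.normal(size=(3, 40))
S = modes @ coeffs + 1e-3 * rng.normal(size=(500, 40))   # nearly rank-3 snapshot matrix

basis, energy = pod_basis(S, r=3)
print(basis.shape, f"retained energy: {energy:.6f}")
```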

Approximate linear programs (ALPs) are well-known models based on value function approximations (VFAs) to obtain policies and lower bounds on the optimal policy cost of discounted-cost Markov decision processes (MDPs). Formulating an ALP requires (i) basis functions, the linear combination of which defines the VFA, and (ii) a state-relevance distribution, which determines the relative importance of different states in the ALP objective for the purpose of minimizing VFA error. Both these choices are typically heuristic: basis function selection relies on domain knowledge while the state-relevance distribution is specified using the frequency of states visited by a heuristic policy. We propose a self-guided sequence of ALPs that embeds random basis functions obtained via inexpensive sampling and uses the known VFA from the previous iteration to guide VFA computation in the current iteration. Self-guided ALPs mitigate the need for domain knowledge during basis function selection as well as the impact of the initial choice of the state-relevance distribution, thus significantly reducing the ALP implementation burden. We establish high probability error bounds on the VFAs from this sequence and show that a worst-case measure of policy performance is improved. We find that these favorable implementation and theoretical properties translate to encouraging numerical results on perishable inventory control and options pricing applications, where self-guided ALP policies improve upon policies from problem-specific methods. More broadly, our research takes a meaningful step toward application-agnostic policies and bounds for MDPs.
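
To make the ALP formulation concrete, the sketch below sets up a tiny ALP for a random discounted-cost MDP with a uniform state-relevance distribution and random cosine basis functions. All sizes, the random MDP, and the basis are illustrative assumptions; this is the generic ALP, not the paper's self-guided construction.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Tiny random discounted-cost MDP (hypothetical sizes for illustration).
n_states, n_actions, gamma = 20, 3, 0.95
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)            # row-stochastic transition kernels
cost = rng.random((n_actions, n_states))     # per-state costs for each action

# Random (cosine) basis functions plus a constant intercept.
n_basis = 10
s_feat = np.linspace(0.0, 1.0, n_states)[:, None]
omega = rng.normal(size=(1, n_basis))
phase = rng.uniform(0, 2 * np.pi, size=n_basis)
Phi = np.hstack([np.ones((n_states, 1)), np.cos(s_feat @ omega + phase)])

nu = np.full(n_states, 1.0 / n_states)       # uniform state-relevance distribution

# ALP: maximize nu^T Phi theta  s.t.  Phi theta <= cost_a + gamma P_a Phi theta  for all a.
A_ub = np.vstack([Phi - gamma * P[a] @ Phi for a in range(n_actions)])
b_ub = np.concatenate([cost[a] for a in range(n_actions)])
res = linprog(-Phi.T @ nu, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * Phi.shape[1])

vfa = Phi @ res.x                             # pointwise lower bound on the optimal value function
print("ALP objective (nu-weighted VFA):", nu @ vfa)
```

Any feasible VFA satisfies the Bellman inequality in every state, so it lower-bounds the optimal value function; maximizing its $\nu$-weighted average pushes that bound up, which is why the choices of basis and of $\nu$ matter.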

Continuous-time quantum walks (CTQWs) have proven to be an extremely useful framework for the design of several quantum algorithms. Often, the running time of quantum algorithms in this framework is characterized by the quantum hitting time: the time required by the quantum walk to find a vertex of interest with high probability. In this article, we provide improved upper bounds for the quantum hitting time that can be applied to several CTQW-based quantum algorithms. In particular, we apply our techniques to the glued-trees problem, improving its hitting-time upper bound by a polynomial factor: from $O(n^5)$ to $O(n^2\log n)$. Furthermore, our methods also exponentially improve the dependence on precision of the continuous-time quantum walk based algorithm of Chakraborty et al. [PRA 102, 022227 (2020)] for finding a marked node on any ergodic, reversible Markov chain.
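
For readers less familiar with the model, the snippet below evolves a continuous-time quantum walk $e^{-\mathrm{i}At}$ generated by a graph's adjacency matrix and reads off the probability of finding the walker at a target vertex. The cycle graph, times, and target vertex are illustrative choices unrelated to the glued-trees instance analysed in the paper.

```python
import numpy as np
from scipy.linalg import expm

def ctqw_hit_probability(A, start, target, t):
    """Probability that a continuous-time quantum walk generated by the adjacency
    matrix A, started at |start>, is measured at |target> at time t."""
    psi0 = np.zeros(A.shape[0], dtype=complex)
    psi0[start] = 1.0
    psi_t = expm(-1j * t * A) @ psi0
    return abs(psi_t[target]) ** 2

# Cycle graph on 8 vertices; walk from vertex 0 towards the antipodal vertex 4.
n = 8
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0

for t in (1.0, 2.0, 3.0):
    print(f"t = {t}: P(target) = {ctqw_hit_probability(A, 0, 4, t):.4f}")
```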
