
The {\em Spanning Tree Congestion} problem is an easy-to-state NP-hard problem: given a graph $G$, construct a spanning tree $T$ of $G$ minimizing its maximum edge congestion, where the congestion of an edge $e\in T$ is the number of edges $uv$ in $G$ such that the unique path between $u$ and $v$ in $T$ passes through $e$; the optimum value for a given graph $G$ is denoted $STC(G)$. It is known that {\em every} spanning tree is an $n/2$-approximation. A long-standing open problem is to design a better approximation algorithm. Our contribution towards this goal is an $O(\Delta\cdot\log^{3/2}n)$-approximation algorithm for the minimum congestion spanning tree problem, where $\Delta$ is the maximum degree in $G$. For graphs with maximum degree bounded by a polylogarithm of the number of vertices, this is an exponential improvement over the previous best approximation. For graphs with maximum degree bounded by $o(n/\log^{3/2}n)$, we get an $o(n)$-approximation; this is the largest class of graphs for which a sublinear approximation is known for this problem. Our main tool for the algorithm is a new lower bound on the spanning tree congestion, which is of independent interest. We prove that for every graph $G$, $STC(G)\geq \Omega(hb(G)/\Delta)$, where $hb(G)$ denotes the maximum bisection width over all subgraphs of $G$.
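
To make the objective concrete, the following Python sketch (not taken from the paper; the graph and tree are hypothetical edge lists) computes the congestion of a given spanning tree by routing every graph edge along its unique tree path. Minimizing this quantity over all spanning trees of $G$ gives $STC(G)$.

    from collections import defaultdict, deque

    def tree_congestion(graph_edges, tree_edges):
        # Maximum edge congestion of spanning tree T (tree_edges) in graph G (graph_edges):
        # the congestion of a tree edge e is the number of graph edges uv whose unique
        # u-v path in T passes through e.
        adj = defaultdict(list)
        for u, v in tree_edges:
            adj[u].append(v)
            adj[v].append(u)
        congestion = {frozenset(e): 0 for e in tree_edges}
        for u, v in graph_edges:
            parent = {u: None}
            queue = deque([u])
            while queue:                      # BFS in T to recover the unique u-v path
                x = queue.popleft()
                if x == v:
                    break
                for y in adj[x]:
                    if y not in parent:
                        parent[y] = x
                        queue.append(y)
            while parent[v] is not None:      # walk the path back and charge its tree edges
                congestion[frozenset((v, parent[v]))] += 1
                v = parent[v]
        return max(congestion.values())

    # usage: a 4-cycle with a Hamiltonian path as spanning tree; every tree edge has congestion 2
    print(tree_congestion([(0, 1), (1, 2), (2, 3), (3, 0)], [(0, 1), (1, 2), (2, 3)]))  # 2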

Related content

Optimization over the set of matrices $X$ that satisfy $X^\top B X = I_p$, referred to as the generalized Stiefel manifold, appears in many applications involving sampled covariance matrices, such as canonical correlation analysis (CCA), independent component analysis (ICA), and the generalized eigenvalue problem (GEVP). These problems are typically solved by iterative methods that require a fully formed $B$. We propose a cheap stochastic iterative method that solves the optimization problem while having access only to random estimates of $B$. Our method does not enforce the constraint in every iteration; instead, it produces iterates that converge to critical points on the generalized Stiefel manifold defined in expectation. The method has lower per-iteration cost, requires only matrix multiplications, and has the same convergence rates as its Riemannian optimization counterparts that require the full matrix $B$. Experiments demonstrate its effectiveness in various machine learning applications involving generalized orthogonality constraints, including CCA, ICA, and the GEVP.
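
As a concrete illustration of the constraint itself (not of the paper's stochastic method), the numpy sketch below maps an arbitrary full-rank matrix onto the generalized Stiefel manifold by $B$-orthonormalization, i.e. $X \mapsto X (X^\top B X)^{-1/2}$; the matrices are hypothetical.

    import numpy as np

    def b_orthonormalize(X, B):
        # Map X onto {X : X^T B X = I_p}; assumes B is symmetric positive definite
        # and X has full column rank.
        M = X.T @ B @ X                              # p x p, symmetric positive definite
        w, V = np.linalg.eigh(M)
        M_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T    # M^{-1/2}
        return X @ M_inv_sqrt

    # usage: a sampled covariance B and a random candidate X
    rng = np.random.default_rng(0)
    samples = rng.standard_normal((50, 8))
    B = samples.T @ samples / 50 + 1e-3 * np.eye(8)  # well-conditioned covariance estimate
    X = rng.standard_normal((8, 3))
    Y = b_orthonormalize(X, B)
    print(np.allclose(Y.T @ B @ Y, np.eye(3)))       # True: Y lies on the manifold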

We give a comprehensive description of Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals $\mathcal F_\nu := \text{MMD}_K^2(\cdot, \nu)$ towards given target measures $\nu$ on the real line, where we focus on the negative distance kernel $K(x,y) := -|x-y|$. In one dimension, the Wasserstein-2 space can be isometrically embedded into the cone $\mathcal C(0,1) \subset L_2(0,1)$ of quantile functions, leading to a characterization of Wasserstein gradient flows via the solution of an associated Cauchy problem on $L_2(0,1)$. Based on the construction of an appropriate counterpart of $\mathcal F_\nu$ on $L_2(0,1)$ and its subdifferential, we provide a solution of the Cauchy problem. For discrete target measures $\nu$, this results in a piecewise linear solution formula. We prove invariance and smoothing properties of the flow on subsets of $\mathcal C(0,1)$. For certain $\mathcal F_\nu$-flows, this implies that initial point measures instantly become absolutely continuous and stay so over time. Finally, we illustrate the behavior of the flow by various numerical examples using an implicit Euler scheme, which is easily computable by a bisection algorithm. For continuous targets $\nu$, the explicit Euler scheme can also be employed, although with limited convergence guarantees.
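
For concreteness (this is not the paper's flow scheme), the functional $\mathcal F_\nu$ can be evaluated directly on discrete measures; below is a minimal numpy sketch for equally weighted particles on the real line.

    import numpy as np

    def mmd2_negative_distance(x, y):
        # Squared MMD between the empirical measures of x and y for the
        # negative distance kernel K(x, y) = -|x - y|.
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        kxx = -np.abs(x[:, None] - x[None, :]).mean()
        kyy = -np.abs(y[:, None] - y[None, :]).mean()
        kxy = -np.abs(x[:, None] - y[None, :]).mean()
        return kxx + kyy - 2.0 * kxy

    # usage: particles x measured against samples of a target nu
    print(mmd2_negative_distance([0.0, 0.1, 0.2], [1.0, 2.0]))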

We study the query complexity of the metric Steiner Tree problem, where we are given an $n \times n$ metric on a set $V$ of vertices along with a set $T \subseteq V$ of $k$ terminals, and the goal is to find a tree of minimum cost that contains all terminals in $T$. The query complexity for the related minimum spanning tree (MST) problem is well understood: for any fixed $\varepsilon > 0$, one can estimate the MST cost to within a $(1+\varepsilon)$-factor using only $\tilde{O}(n)$ queries, and this is known to be tight. This implies that a $(2 + \varepsilon)$-approximate estimate of the Steiner Tree cost can be obtained with $\tilde{O}(k)$ queries by simply applying the MST cost estimation algorithm on the metric induced by the terminals. Our first result shows that any (randomized) algorithm that estimates the Steiner Tree cost to within a $(5/3 - \varepsilon)$-factor requires $\Omega(n^2)$ queries, even if $k$ is a constant. This lower bound is in sharp contrast to an upper bound of $O(nk)$ queries for computing a $(5/3)$-approximate Steiner Tree, which follows from previous work by Du and Zelikovsky. Our second main result, and the main technical contribution of this work, is a sublinear query algorithm for estimating the Steiner Tree cost to within a strictly better-than-$2$ factor, with query complexity $\tilde{O}(n^{12/7} + n^{6/7}\cdot k)=\tilde{O}(n^{13/7})=o(n^2)$. We complement this result by showing an $\tilde{\Omega}(n + k^{6/5})$ query lower bound for any algorithm that estimates the Steiner Tree cost to a strictly better-than-$2$ factor. Thus $\tilde{\Omega}(n^{6/5})$ queries are needed just to beat a $2$-approximation when $k = \Omega(n)$, in sharp contrast to MST cost estimation, where a $(1+o(1))$-approximate cost estimate is achievable with only $\tilde{O}(n)$ queries.
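
The $(2+\varepsilon)$ baseline mentioned above has a simple classical counterpart: the MST of the metric restricted to the terminals costs at most twice the optimal Steiner tree. The sketch below (illustrative only; it reads the full matrix and ignores the query-bounded setting) computes that upper bound with Prim's algorithm.

    def terminal_mst_cost(dist, terminals):
        # Cost of an MST over the metric restricted to the terminals (Prim's algorithm).
        # Classical fact: this cost is at most twice the optimal Steiner tree cost.
        ts = list(terminals)
        in_tree = {ts[0]}
        cost = 0.0
        while len(in_tree) < len(ts):
            u, v = min(((a, b) for a in in_tree for b in ts if b not in in_tree),
                       key=lambda e: dist[e[0]][e[1]])
            cost += dist[u][v]
            in_tree.add(v)
        return cost

    # usage: a 4-point path metric with terminals {0, 2, 3}
    d = [[0, 1, 2, 3],
         [1, 0, 1, 2],
         [2, 1, 0, 1],
         [3, 2, 1, 0]]
    print(terminal_mst_cost(d, [0, 2, 3]))  # 3.0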

We study computational problems related to the Schr\"odinger operator $H = -\Delta + V$ in real space under the conditions that (i) the potential function $V$ is smooth and has its value and derivative bounded by some polynomial in $n$, and (ii) $V$ consists only of $O(1)$-body interactions. We prove that (i) simulating the dynamics generated by the Schr\"odinger operator implements universal quantum computation, i.e., it is BQP-hard, and (ii) estimating the ground energy of the Schr\"odinger operator is as hard as estimating that of local Hamiltonians with no sign problem (a.k.a. stoquastic Hamiltonians), i.e., it is StoqMA-complete. This result is particularly intriguing because the ground energy problem for general bosonic Hamiltonians is known to be QMA-hard and it is widely believed that $\texttt{StoqMA}\varsubsetneq \texttt{QMA}$.
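
For intuition only (this says nothing about the BQP- or StoqMA-hardness statements), the ground energy of a one-dimensional $H = -\Delta + V$ can be estimated classically by finite differences; a sketch with arbitrary grid parameters:

    import numpy as np

    def ground_energy_1d(V, x_min=-10.0, x_max=10.0, m=400):
        # Smallest eigenvalue of a finite-difference discretization of
        # H = -d^2/dx^2 + V(x) on [x_min, x_max] with Dirichlet boundary conditions.
        x = np.linspace(x_min, x_max, m)
        h = x[1] - x[0]
        lap = (np.diag(np.full(m, -2.0)) + np.diag(np.ones(m - 1), 1)
               + np.diag(np.ones(m - 1), -1)) / h ** 2
        H = -lap + np.diag(V(x))
        return np.linalg.eigvalsh(H)[0]

    # usage: harmonic potential V(x) = x^2, whose exact ground energy is 1
    print(ground_energy_1d(lambda x: x ** 2))  # ~1.0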

The Euclidean algorithm is one of the oldest algorithms known to mankind. Given two integers $a_1$ and $a_2$, it computes their greatest common divisor (gcd) in a very elegant way. From a lattice perspective, it computes a basis of the sum of the two one-dimensional lattices $a_1 \mathbb{Z}$ and $a_2 \mathbb{Z}$, since $\gcd(a_1,a_2) \mathbb{Z} = a_1 \mathbb{Z} + a_2 \mathbb{Z}$. In this paper, we show that the classical Euclidean algorithm can be adapted in a very natural way to compute a basis of a general lattice $L(A_1, \ldots , A_n)$ given vectors $A_1, \ldots , A_n \in \mathbb{Z}^d$ with $n > \mathrm{rank}(A_1, \ldots , A_n)$. Like the Euclidean algorithm, our algorithm is very easy to describe and implement and can be written in 12 lines of pseudocode. As our main result, we obtain an algorithm that computes a lattice basis for given vectors $A_1, \ldots , A_n \in \mathbb{Z}^d$ in time (counting bit operations) $LS + \tilde{O}((n-d)d^2 \cdot \log(||A||))$, where $LS$ is the time required to obtain the exact fractional solution of a certain system of linear equalities. The analysis of the running time of our algorithm relies on fundamental statements about the fractionality of solutions of systems of linear equations. Previously, the fastest algorithm for lattice basis computation was due to Storjohann and Labahn [SL96], with a running time of $\tilde{O}(nd^\omega\log ||A||)$. For current upper bounds on $LS$, our algorithm improves on this running time by a factor of at least $d^{0.12}$ and is therefore the first general algorithmic improvement for this classical problem in nearly 30 years. Finally, we present a postprocessing procedure which yields an improved size bound of $\sqrt{d}\, ||A||$ for the vectors of the resulting basis matrix.
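
For reference, the one-dimensional case described above is just the classical Euclidean algorithm; a minimal sketch (the paper's lattice-basis generalization is not reproduced here):

    def gcd(a1, a2):
        # Euclidean algorithm: gcd(a1, a2) generates the lattice a1*Z + a2*Z.
        while a2 != 0:
            a1, a2 = a2, a1 % a2
        return abs(a1)

    # usage: the sum of lattices 12Z + 18Z equals 6Z
    print(gcd(12, 18))  # 6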

We put forth Oblivious State Preparation (OSP) as a cryptographic primitive that unifies techniques developed in the context of a quantum server interacting with a classical client. OSP allows a classical polynomial-time sender to input a choice of one out of two public observables, and a quantum polynomial-time receiver to recover an eigenstate of the corresponding observable -- while keeping the sender's choice hidden from any malicious receiver. We obtain the following results:
- The existence of (plain) trapdoor claw-free functions implies OSP, and the existence of dual-mode trapdoor claw-free functions implies round-optimal (two-round) OSP.
- OSP implies the existence of proofs of quantumness, test of a qubit, blind classical delegation of quantum computation, and classical verification of quantum computation.
- Two-round OSP implies quantum money with classical communication, classically-verifiable position verification, and (additionally assuming classical FHE with log-depth decryption) quantum FHE.
Several of these applications were previously only known via tailored LWE-based constructions, whereas our OSP-based constructions yield new results from a wider variety of assumptions, including hard problems on cryptographic group actions. Finally, towards understanding the minimal hardness assumptions required to realize OSP, we prove the following:
- OSP implies oblivious transfer between one classical and one quantum party.
- Two-round OSP implies public-key encryption with classical keys and ciphertexts.
In particular, these results help to ``explain'' the use of public-key cryptography in the known approaches to establishing a ``classical leash'' on a quantum server. For example, combined with a result of Austrin et al. (CRYPTO 22), we conclude that perfectly-correct OSP cannot exist unconditionally in the (quantum) random oracle model.

Approximating the action of a matrix function $f(\mathbf{A})$ on a vector $\mathbf{b}$ is an increasingly important primitive in machine learning, data science, and statistics, with applications such as sampling high-dimensional Gaussians, Gaussian process regression and Bayesian inference, principal component analysis, and approximating Hessian spectral densities. Over the past decade, a number of algorithms enjoying strong theoretical guarantees have been proposed for this task. Many of the most successful belong to a family of algorithms called Krylov subspace methods. Remarkably, a classic Krylov subspace method, called the Lanczos method for matrix functions (Lanczos-FA), frequently outperforms newer methods in practice. Our main result is a theoretical justification for this finding: we show that, for a natural class of rational functions, Lanczos-FA matches the error of the best possible Krylov subspace method up to a multiplicative approximation factor. The approximation factor depends on the degree of $f(x)$'s denominator and the condition number of $\mathbf{A}$, but not on the number of iterations $k$. Our result provides a strong justification for the excellent performance of Lanczos-FA, especially on functions that are well approximated by rationals, such as the matrix square root.
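
A minimal numpy sketch of Lanczos-FA as described above (finite-precision effects and reorthogonalization are ignored; the matrix, the function $f$, and the iteration count are illustrative choices):

    import numpy as np

    def lanczos_fa(A, b, k, f):
        # Approximate f(A) @ b from a k-dimensional Krylov subspace; A is real symmetric.
        n = b.shape[0]
        Q = np.zeros((n, k))
        alpha = np.zeros(k)
        beta = np.zeros(k)
        q_prev = np.zeros(n)
        q = b / np.linalg.norm(b)
        for j in range(k):
            Q[:, j] = q
            w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
            alpha[j] = q @ w
            w = w - alpha[j] * q
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-14:                       # Krylov space is invariant; stop early
                Q, alpha, beta = Q[:, :j + 1], alpha[:j + 1], beta[:j + 1]
                break
            q_prev, q = q, w / beta[j]
        T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
        evals, evecs = np.linalg.eigh(T)
        fT_e1 = evecs @ (f(evals) * evecs[0])         # f(T) @ e_1 via eigendecomposition of T
        return np.linalg.norm(b) * (Q @ fT_e1)

    # usage: approximate the action of the matrix square root, A^{1/2} b
    rng = np.random.default_rng(1)
    M = rng.standard_normal((200, 200))
    A = M @ M.T + 200 * np.eye(200)                   # symmetric positive definite
    b = rng.standard_normal(200)
    approx = lanczos_fa(A, b, 30, np.sqrt)
    w, V = np.linalg.eigh(A)
    exact = V @ (np.sqrt(w) * (V.T @ b))              # reference via full eigendecomposition
    print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))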

We quantify the threat that network adversaries pose of inducing \emph{network overload} through \emph{routing attacks}, in which a subset of network nodes is hijacked by an adversary. We develop routing attacks on the hijacked nodes for two objectives related to overload: \emph{no-loss throughput minimization} and \emph{loss maximization}. The first objective seeks a routing attack that minimizes the network throughput that is guaranteed to survive. We develop a polynomial-time algorithm that outputs the optimal routing attack in multi-hop networks given global information on the network's topology, and an algorithm with an approximation ratio of $2$ under partial information. The second objective seeks to maximize the throughput loss. We demonstrate that this problem is NP-hard, and develop two approximation algorithms, with multiplicative and additive guarantees respectively, for single-hop networks. We further investigate the adversary's optimal selection of nodes to hijack so as to maximize network overload. We propose a heuristic polynomial-time algorithm for this NP-hard problem and prove its optimality in special cases. We validate the near-optimal performance of the proposed algorithms over a wide range of network settings. Our results demonstrate that the proposed algorithms can accurately quantify the risk of overload given an arbitrary set of hijacked nodes and identify the critical nodes that should be protected against routing attacks.

The Holant theorem is a powerful tool for studying the computational complexity of counting problems in the Holant framework. Due to the great expressiveness of the Holant framework, a converse to the Holant theorem would itself be a very powerful counting indistinguishability theorem. The most general converse does not hold, but we prove the following, still highly general, version: if any two sets of real-valued signatures are Holant-indistinguishable, then they are equivalent up to an orthogonal transformation. This resolves a partially open conjecture of Xia (2010). Consequences of this theorem include the well-known result that homomorphism counts from all graphs determine a graph up to isomorphism, the classical sufficient condition for simultaneous orthogonal similarity of sets of real matrices, and a combinatorial characterization of simultaneously orthogonally decomposable (odeco) sets of symmetric tensors.

Neural machine translation (NMT) is a deep learning based approach to machine translation that yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of state-of-the-art domain adaptation techniques for NMT.
