
We show that a constant-factor approximation of the shortest and closest lattice vector problems in any norm can be computed in time $2^{0.802\, n}$. This contrasts with the corresponding $2^n$-time, (gap-)SETH-based lower bounds for these problems, which apply even to small constant approximation factors. For both problems, $\mathrm{SVP}$ and $\mathrm{CVP}$, we reduce to the case of the Euclidean norm. A key technical ingredient in that reduction is a twist on Milman's construction of an $M$-ellipsoid, which approximates any symmetric convex body $K$ by an ellipsoid $\mathcal{E}$ such that $2^{\varepsilon n}$ translates of a constant scaling of $\mathcal{E}$ can cover $K$ and vice versa.
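In covering-number terms, the ellipsoid guarantee just described can be restated as follows, where $N(A,B)$ denotes the minimum number of translates of $B$ needed to cover $A$ and $c>0$ is an absolute constant (a restatement of the sentence above, not additional content from the paper):
\[
N(K,\, c\,\mathcal{E}) \leq 2^{\varepsilon n} \quad\text{and}\quad N(\mathcal{E},\, c\,K) \leq 2^{\varepsilon n}.
\]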

Related content

The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e., by a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as the minimisation of the Onsager-Machlup functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the $\Gamma$-convergence of Onsager-Machlup functionals, and we apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
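For orientation, in the Gaussian-prior setting the Onsager-Machlup functional takes the standard form below, with $\Phi$ the negative log-likelihood and $\|\cdot\|_E$ the Cameron-Martin norm of the prior (standard background from that setting, not notation fixed by this abstract); a MAP estimator is then any minimiser:
\[
I(u) = \Phi(u) + \tfrac{1}{2}\,\|u\|_E^2, \qquad u_{\mathrm{MAP}} \in \operatorname*{arg\,min}_{u \in E}\, I(u).
\]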

In this paper we study the recursive sequence $x_{n+1}=\frac{x_n+f(x_n)}{2}$ for a continuous real-valued function $f$ on an interval $[a,b]$, where $x_0$ is an arbitrary point in $[a,b]$. First, we present some results for continuous real-valued functions $f$ on $[a,b]$ that have a unique fixed point $c\in (a,b)$, and we show that the sequence $\{x_n\}$ converges to $c$ provided that $f$ satisfies certain conditions. We then extend these results by assuming that $c$ is a root of $f$ instead of a fixed point. We define two further sequences by $x^{+}_0=x^{-}_0=x_0\in [a,b]$, $x^{+}_{n+1}=x^{+}_n+\frac{f(x^{+}_n)}{2}$, and $x^{-}_{n+1}= x^{-}_n-\frac{f(x^{-}_n)}{2}$ for each $n\ge 0$. We show that for each continuous real-valued function $f$ on $[a,b]$ with $f(a)>0>f(b)$ that has a unique root $c\in (a,b)$, the sequence $\{x^{+}_n\}$ converges to $c$ provided that $f'\ge -2$ on $(a,b)$. Analogously, we show that for each continuous real-valued function $f$ on $[a,b]$ with $f(a)<0<f(b)$ that has a unique root $c\in (a,b)$, the sequence $\{x^{-}_n\}$ converges to $c$ provided that $f'\le 2$ on $(a,b)$. We also give an example of a continuous function $f:[a,b]\to [a,b]$ for which the sequence $\{x_{n}\}$ fails to converge for some $x_0\in [a,b]$.
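A minimal numerical sketch of the three iterations in Python (the function names and the cosine example are ours, for illustration only):

```python
import math

def averaged_iteration(f, x0, n_steps=200):
    """Iterate x_{n+1} = (x_n + f(x_n)) / 2, which targets a fixed
    point c of f under the conditions discussed in the abstract."""
    x = x0
    for _ in range(n_steps):
        x = (x + f(x)) / 2
    return x

def root_iteration_plus(f, x0, n_steps=200):
    """Iterate x_{n+1} = x_n + f(x_n)/2, which targets a root c of f
    when f(a) > 0 > f(b) and f' >= -2 on (a, b)."""
    x = x0
    for _ in range(n_steps):
        x = x + f(x) / 2
    return x

def root_iteration_minus(f, x0, n_steps=200):
    """Iterate x_{n+1} = x_n - f(x_n)/2, which targets a root c of f
    when f(a) < 0 < f(b) and f' <= 2 on (a, b)."""
    x = x0
    for _ in range(n_steps):
        x = x - f(x) / 2
    return x

# f = cos on [0, 2] satisfies f(0) > 0 > f(2) and f' = -sin >= -1 >= -2,
# with unique root pi/2, so the "+" iteration converges to pi/2.
print(root_iteration_plus(math.cos, 0.2))   # ~1.5707963
# The averaged iteration converges to the fixed point of cos (~0.7390851).
print(averaged_iteration(math.cos, 0.2))
```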

We present a new data structure to approximate accurately and efficiently a polynomial $f$ of degree $d$ given as a list of coefficients. Its properties allow us to improve the state-of-the-art bounds on the bit complexity for the problems of root isolation and approximate multipoint evaluation. This data structure also leads to a new geometric criterion to detect ill-conditioned polynomials, implying notably that the standard condition number of the zeros of a polynomial is at least exponential in the number of roots of modulus less than $1/2$ or greater than $2$. Given a polynomial $f$ of degree $d$ with $\|f\|_1 \leq 2^\tau$ for $\tau \geq 1$, isolating all its complex roots or evaluating it at $d$ points can be done with a quasi-linear number of arithmetic operations. However, considering the bit complexity, the state-of-the-art algorithms require at least $d^{3/2}$ bit operations even for well-conditioned polynomials and when the accuracy required is low. Given a positive integer $m$, we can compute our new data structure and evaluate $f$ at $d$ points in the unit disk with an absolute error less than $2^{-m}$ in $\widetilde O(d(\tau+m))$ bit operations, where $\widetilde O(\cdot)$ means that we omit logarithmic factors. We also show that if $\kappa$ is the absolute condition number of the zeros of $f$, then we can isolate all the roots of $f$ in $\widetilde O(d(\tau + \log \kappa))$ bit operations. Moreover, our algorithms are simple to implement. For approximating the complex roots of a polynomial, we implemented a small prototype in \verb|Python/NumPy| that is an order of magnitude faster than the state-of-the-art solver \verb/MPSolve/ for high degree polynomials with random coefficients.
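For context, the baseline below (our illustration, not the paper's data structure) evaluates $f$ at each point by Horner's rule, costing $O(d)$ arithmetic operations per point and hence $O(d^2)$ overall for $d$ points; the abstract's contribution is to perform all $d$ evaluations in $\widetilde O(d(\tau+m))$ bit operations instead.

```python
import numpy as np

def horner_eval(coeffs, points):
    """Baseline multipoint evaluation by Horner's rule: O(d) arithmetic
    operations per point, O(d^2) overall for d points.
    coeffs[k] is the coefficient of x^k."""
    acc = np.zeros_like(points, dtype=complex)
    for c in reversed(coeffs):
        acc = acc * points + c
    return acc

# Example: f(x) = 1 + 2x + 3x^2 at four points in the unit disk.
pts = 0.5 * np.exp(2j * np.pi * np.arange(4) / 4)
print(horner_eval([1, 2, 3], pts))
```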

We study the connections between the notions of combinatorial discrepancy and graph degeneracy. In particular, we prove that the maximum discrepancy, over all subgraphs $H$ of a graph $G$, of the neighborhood set system of $H$ is sandwiched between $\Omega(\log\mathrm{deg}(G))$ and $\mathcal{O}(\mathrm{deg}(G))$, where $\mathrm{deg}(G)$ denotes the degeneracy of $G$. We extend this result to inequalities relating weak coloring numbers and the discrepancy of graph powers, and deduce a new characterization of bounded expansion classes. We then switch to a model-theoretic point of view, introduce pointer structures, and study their relations to graph classes with bounded expansion. We deduce that a monotone class of graphs has bounded expansion if and only if all the set systems definable in this class have bounded hereditary discrepancy. Using known bounds on the VC-density of set systems definable in nowhere dense classes, we also give a characterization of nowhere dense classes in terms of discrepancy. As consequences of our results, we obtain a corollary on the discrepancy of neighborhood set systems of edge-colored graphs, a polynomial-time algorithm to compute $\varepsilon$-approximations of size $\mathcal{O}(1/\varepsilon)$ for set systems definable in bounded expansion classes, an application to clique coloring, and even the non-existence of a quantifier elimination scheme for nowhere dense classes.
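For reference, the degeneracy $\mathrm{deg}(G)$ used throughout can be computed by the standard peeling procedure (repeatedly remove a minimum-degree vertex); a minimal quadratic-time Python sketch, ours for illustration (the linear-time version uses bucket queues):

```python
def degeneracy(adj):
    """Degeneracy of a simple undirected graph given as
    {vertex: set(neighbors)}: the maximum, over the peeling order,
    of the degree of the removed vertex at removal time."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a local copy
    deg = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # minimum-degree vertex
        deg = max(deg, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return deg

# A triangle with a pendant vertex is 2-degenerate.
print(degeneracy({0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}))  # 2
```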

It is well known that the NP-complete Set Cover problem can be approximated within a factor of ln(n), and it was recently proven that this approximation ratio is optimal unless P = NP. This optimality result is the product of many advances in characterizations of NP in terms of interactive proof systems and probabilistically checkable proofs (PCPs), together with improvements to the analyses thereof. As a result, however, it is difficult to extract the development of Set Cover approximation bounds from the greater scope of proof system analysis. This paper presents a chronological progression of results on lower-bounding the approximation ratio of Set Cover. We analyze a series of proofs of progressively better bounds and unify the results under similar terminology and frameworks to provide an accurate comparison of proof techniques and their results. We also treat many preliminary results as black boxes to better focus our analysis on the core reductions to Set Cover instances. The result is alternative versions of several hardness proofs, beginning with initial inapproximability results and culminating in a version of the proof that ln(n) is a tight lower bound.
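The ln(n)-approximation referenced at the start is the classical greedy algorithm: repeatedly pick the set covering the most still-uncovered elements, achieving ratio $H(n) \le \ln(n) + 1$. A minimal sketch (our own illustration, not taken from the paper):

```python
def greedy_set_cover(universe, sets):
    """Greedy Set Cover: repeatedly choose the set covering the most
    still-uncovered elements; achieves an H(n) <= ln(n) + 1
    approximation ratio, matching the lower bound discussed above."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

print(greedy_set_cover(range(5), [{0, 1, 2}, {1, 3}, {3, 4}, {2, 4}]))
```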

Given a simple graph $G$ and an integer $k$, the goal of the $k$-Clique problem is to decide whether $G$ contains a complete subgraph of size $k$. We say that an algorithm approximates $k$-Clique within a factor $g(k)$ if it can find a clique of size at least $k / g(k)$ whenever $G$ is guaranteed to have a $k$-clique. Recently, it was shown that approximating $k$-Clique within a constant factor is W[1]-hard [Lin21]. We study the approximation of $k$-Clique under the Exponential Time Hypothesis (ETH). The reduction of [Lin21] already implies an $n^{\Omega(\sqrt[6]{\log k})}$-time lower bound under ETH; we improve this lower bound to $n^{\Omega(\log k)}$. Using the gap-amplification technique via expander graphs, we also prove that there is no $k^{o(1)}$-factor FPT-approximation algorithm for $k$-Clique under ETH. Finally, we suggest a new way to prove the Parameterized Inapproximability Hypothesis (PIH) under ETH: we show that if there is no $n^{O(\frac{k}{\log k})}$-time algorithm approximating $k$-Clique within a constant factor, then PIH is true.

We prove the RLWE/PLWE equivalence for the maximal totally real subextension of the $2^rpq$-th cyclotomic field and discuss some of its applications to cryptanalysis.

For an $n$-vertex digraph $G=(V,E)$, a \emph{shortcut set} is a (small) subset of edges $H$ taken from the transitive closure of $G$ that, when added to $G$, guarantees that the diameter of $G \cup H$ is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. A folklore result in this context shows that every $n$-vertex digraph admits a shortcut set of linear size (i.e., of $O(n)$ edges) that reduces the diameter to $\widetilde{O}(\sqrt{n})$. Despite extensive research over the years, the question of whether one can reduce the diameter to $o(\sqrt{n})$ with $\widetilde{O}(n)$ shortcut edges has remained open. We provide the first improved diameter-sparsity tradeoff for this problem, breaking the $\sqrt{n}$ diameter barrier. Specifically, we show an $O(n^{\omega})$-time randomized algorithm for computing a linear-size shortcut set that reduces the diameter of the digraph to $\widetilde{O}(n^{1/3})$. This narrows the gap with respect to the current diameter lower bound of $\Omega(n^{1/6})$ by [Huang and Pettie, SWAT'18]. Moreover, we show that a diameter of $\widetilde{O}(n^{1/2})$ can in fact be achieved with a \emph{sublinear} number of $O(n^{3/4})$ shortcut edges. Formally, letting $S(n,D)$ be the bound on the size of the shortcut set required to reduce the diameter of any $n$-vertex digraph to at most $D$, our algorithms yield: \[ S(n,D)=\begin{cases} \widetilde{O}(n^2/D^3), & \text{for~} D\leq n^{1/3},\\ \widetilde{O}((n/D)^{3/2}), & \text{for~} D> n^{1/3}~. \end{cases} \] We also extend our algorithms to provide improved $(\beta,\epsilon)$-hopsets for $n$-vertex weighted directed graphs.
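The folklore result mentioned above admits a very short description; here is a Python sketch of that folklore construction (our illustration with our choice of sampling rate, not the paper's $\widetilde{O}(n^{1/3})$-diameter algorithm):

```python
import math
import random

def folklore_shortcut_set(n, edges, c=2.0):
    """Folklore O~(n)-size shortcut set: sample each vertex with
    probability ~ c*log(n)/sqrt(n), then add a shortcut (u, v) for every
    sampled pair with v reachable from u. With high probability this
    uses O~(n) shortcut edges and reduces the diameter to O~(sqrt(n))."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)

    def reachable_from(src):  # iterative DFS over the digraph
        seen, stack = {src}, [src]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    p = min(1.0, c * math.log(n + 1) / math.sqrt(n))
    sampled = [v for v in range(n) if random.random() < p]
    return {(u, v) for u in sampled for v in reachable_from(u)
            if v != u and v in sampled}
```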

A common approach to tackling a combinatorial optimization problem is to first solve a continuous relaxation and then round the resulting fractional solution. For the latter step, the framework of contention resolution schemes (CR schemes), introduced by Chekuri, Vondrak, and Zenklusen, is a general and successful tool. A CR scheme takes a fractional point $x$ in a relaxation polytope, rounds each coordinate $x_i$ independently to get a possibly infeasible set, and then drops some elements in order to satisfy the independence constraints. Intuitively, a CR scheme is $c$-balanced if every element $i$ is selected with probability at least $c \cdot x_i$. It is known that general matroids admit a $(1-1/e)$-balanced CR scheme, and that this is (asymptotically) optimal; this holds in particular for the special case of uniform matroids of rank one. In this work, for uniform matroids of rank $k$ on $n$ elements, we provide a simple and explicit monotone CR scheme with a balancedness of $1 - \binom{n}{k}\:\left(1-\frac{k}{n}\right)^{n+1-k}\:\left(\frac{k}{n}\right)^k$, and show that this is optimal. As $n$ grows, this expression converges from above to $1 - e^{-k}k^k/k!$. While this asymptotic bound can be obtained by combining previously known results, those require defining an exponential-sized linear program and using random sampling together with the ellipsoid algorithm. Our procedure, in contrast, has the advantage of being simple and explicit. Moreover, the scheme generalizes to an optimal CR scheme for partition matroids.
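The closed-form balancedness and its limit are easy to check numerically; a small Python sketch (function names are ours):

```python
from math import comb, exp, factorial

def balancedness(n, k):
    """The scheme's balancedness: 1 - C(n,k) (1-k/n)^(n+1-k) (k/n)^k."""
    return 1 - comb(n, k) * (1 - k / n) ** (n + 1 - k) * (k / n) ** k

def asymptotic(k):
    """Limit as n -> infinity: 1 - e^{-k} k^k / k!."""
    return 1 - exp(-k) * k ** k / factorial(k)

# For k = 1 (rank-one uniform matroids) this recovers 1 - (1-1/n)^n,
# which decreases to the classical 1 - 1/e.
for n in (10, 100, 1000, 10000):
    print(n, balancedness(n, 3))   # converges from above to the limit
print("limit", asymptotic(3))
```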

While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent work has shown that they suffer from a lack of diversity, or mode collapse. The theoretical work of Arora et al.~\cite{AroraGeLiMaZh17} suggests a dilemma about GANs' statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. In contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators). For various generator classes, such as mixtures of Gaussians, exponential families, and invertible neural network generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators provably approximates the Wasserstein distance and/or KL divergence. This implies that if training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes. Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence, indicating that the lack of diversity may be caused by sub-optimality in optimization rather than by statistical inefficiency.
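The IPM induced by a discriminator class $\mathcal{F}$ is $d_{\mathcal{F}}(p,q)=\sup_{f\in\mathcal{F}}|\mathbb{E}_{x\sim p} f(x)-\mathbb{E}_{x\sim q} f(x)|$. A toy Python sketch of its empirical estimate over a small fixed class (the test functions below are our assumptions, far simpler than the neural-network discriminators designed in the paper):

```python
import numpy as np

def empirical_ipm(x, y, discriminators):
    """Estimate d_F(p, q) = sup_{f in F} |E_p f - E_q f| from samples
    x ~ p and y ~ q, over a finite class F of discriminators."""
    return max(abs(np.mean(f(x)) - np.mean(f(y))) for f in discriminators)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)   # samples from p
y = rng.normal(0.5, 1.0, size=10_000)   # samples from q
# Toy discriminator class: a few bounded test functions.
F = [np.tanh, np.sin, lambda t: np.clip(t, -1, 1)]
print(empirical_ipm(x, y, F))
```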
