
Let $(P,E)$ be a $(d+1)$-uniform geometric hypergraph, where $P$ is an $n$-point set in general position in $\mathbb{R}^d$ and $E\subseteq {P\choose d+1}$ is a collection of $\epsilon{n\choose d+1}$ $d$-dimensional simplices with vertices in $P$, for $0<\epsilon\leq 1$. We show that there is a point $x\in {\mathbb R}^d$ that pierces $\displaystyle \Omega\left(\epsilon^{(d^4+d)(d+1)+\delta}{n\choose d+1}\right)$ simplices in $E$, for any fixed $\delta>0$. This is a dramatic improvement in all dimensions $d\geq 3$, over the previous lower bounds of the general form $\displaystyle \epsilon^{(cd)^{d+1}}n^{d+1}$, which date back to the seminal 1991 work of Alon, B\'{a}r\'{a}ny, F\"{u}redi and Kleitman. As a result, any $n$-point set in general position in $\mathbb{R}^d$ admits only $\displaystyle O\left(n^{d-\frac{1}{d(d-1)^4+d(d-1)}+\delta}\right)$ halving hyperplanes, for any $\delta>0$, which is a significant improvement over the previously best known bound $\displaystyle O\left(n^{d-\frac{1}{(2d)^{d}}}\right)$ in all dimensions $d\geq 5$. An essential ingredient of our proof is the following semi-algebraic Tur\'an-type result of independent interest: Let $(V_1,\ldots,V_k,E)$ be a hypergraph of bounded semi-algebraic description complexity in ${\mathbb R}^d$ that satisfies $|E|\geq \varepsilon |V_1|\cdot\ldots \cdot |V_k|$ for some $\varepsilon>0$. Then there exist subsets $W_i\subseteq V_i$ that satisfy $W_1\times W_2\times\ldots\times W_k\subseteq E$, and $|W_1|\cdot\ldots\cdot|W_k|=\Omega\left(\varepsilon^{d(k-1)+1}|V_1|\cdot |V_2|\cdot\ldots\cdot|V_k|\right)$.
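
For orientation, the $k=2$ case of this Tur\'an-type statement (obtained simply by setting $k=2$ in the bound above) reads as follows: if a bipartite graph $(V_1,V_2,E)$ of bounded semi-algebraic description complexity in ${\mathbb R}^d$ satisfies $|E|\geq \varepsilon|V_1|\cdot|V_2|$, then there exist $W_1\subseteq V_1$ and $W_2\subseteq V_2$ with $W_1\times W_2\subseteq E$ and $|W_1|\cdot|W_2|=\Omega\left(\varepsilon^{d+1}|V_1|\cdot|V_2|\right)$.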

Related content

We compare the $(1,\lambda)$-EA and the $(1 + \lambda)$-EA on the recently introduced benchmark DisOM, which is the OneMax function with randomly planted local optima. Previous work showed that if all local optima have the same relative height, then the plus strategy never loses more than a factor $O(n\log n)$ compared to the comma strategy. Here we show that even small random fluctuations in the heights of the local optima have a devastating effect on the plus strategy and lead to super-polynomial runtimes. On the other hand, due to their ability to escape local optima, comma strategies are unaffected by the height of the local optima and remain efficient. Our results hold for a broad class of possible distortions and show that the plus strategy, but not the comma strategy, is generally deceived by sparse unstructured fluctuations of a smooth landscape.
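
To make the distinction between the two selection schemes concrete, here is a minimal sketch of one generation of each strategy on a generic distorted OneMax-style fitness; the fitness model, the mutation rate, and all names are illustrative assumptions, not the benchmark or analysis of the abstract above.

```python
import random

def distorted_onemax(x, distortions):
    """OneMax plus an illustrative planted distortion: `distortions` maps a
    bit string (as a tuple) to an extra height, standing in for the randomly
    planted local optima described above."""
    return sum(x) + distortions.get(tuple(x), 0.0)

def mutate(x, rate):
    """Standard bit-flip mutation: flip each bit independently with probability `rate`."""
    return [b ^ (random.random() < rate) for b in x]

def one_generation(parent, fitness, lam, rate, plus):
    """One generation of the (1+lambda)-EA (plus=True) or the (1,lambda)-EA (plus=False)."""
    offspring = [mutate(parent, rate) for _ in range(lam)]
    best_child = max(offspring, key=fitness)
    if plus:
        # Plus strategy: the parent survives unless some child is at least as
        # good, so a slightly "raised" local optimum can trap the search.
        return best_child if fitness(best_child) >= fitness(parent) else parent
    # Comma strategy: the parent is always replaced, which allows escaping
    # local optima at the cost of occasionally losing fitness.
    return best_child
```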

We consider the constrained sampling problem where the goal is to sample from a target distribution $\pi(x)\propto e^{-f(x)}$ when $x$ is constrained to lie on a convex body $\mathcal{C}$. Motivated by penalty methods from continuous optimization, we propose penalized Langevin Dynamics (PLD) and penalized underdamped Langevin Monte Carlo (PULMC) methods that convert the constrained sampling problem into an unconstrained sampling problem by introducing a penalty function for constraint violations. When $f$ is smooth and gradients are available, we get $\tilde{\mathcal{O}}(d/\varepsilon^{10})$ iteration complexity for PLD to sample the target up to an $\varepsilon$-error, where the error is measured in the TV distance and $\tilde{\mathcal{O}}(\cdot)$ hides logarithmic factors. For PULMC, we improve the result to $\tilde{\mathcal{O}}(\sqrt{d}/\varepsilon^{7})$ when the Hessian of $f$ is Lipschitz and the boundary of $\mathcal{C}$ is sufficiently smooth. To our knowledge, these are the first convergence results for underdamped Langevin Monte Carlo methods in the constrained sampling setting that handle non-convex $f$ and provide guarantees with the best dimension dependency among existing methods with deterministic gradients. If unbiased stochastic estimates of the gradient of $f$ are available, we propose PSGLD and PSGULMC methods that can handle stochastic gradients and are scalable to large datasets without requiring Metropolis-Hastings correction steps. For PSGLD and PSGULMC, when $f$ is strongly convex and smooth, we obtain $\tilde{\mathcal{O}}(d/\varepsilon^{18})$ and $\tilde{\mathcal{O}}(d\sqrt{d}/\varepsilon^{39})$ iteration complexity in the $W_2$ distance. When $f$ is smooth and can be non-convex, we provide finite-time performance bounds and iteration complexity results. Finally, we illustrate the performance on Bayesian LASSO regression and Bayesian constrained deep learning problems.
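
As a rough illustration of the penalty idea (turning the constrained problem into an unconstrained one), here is a minimal sketch of an overdamped penalized Langevin step; the quadratic distance penalty built from a Euclidean projection onto $\mathcal{C}$, the step size, and the penalty weight are illustrative assumptions, not the exact scheme of the paper.

```python
import numpy as np

def penalized_langevin(grad_f, project_C, x0, step=1e-3, penalty=10.0, n_iters=10_000, rng=None):
    """Sketch of penalized Langevin dynamics for sampling pi(x) ~ exp(-f(x)) on a convex body C.

    grad_f    : gradient of the potential f
    project_C : Euclidean projection onto C; the penalty g(x) = (penalty/2) * dist(x, C)^2
                has gradient penalty * (x - project_C(x)).
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        grad_penalty = penalty * (x - project_C(x))
        noise = rng.standard_normal(x.shape)
        # Unconstrained overdamped Langevin step on the penalized potential f + g.
        x = x - step * (grad_f(x) + grad_penalty) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)
```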

Let $X=X_1\sqcup X_2\sqcup\ldots\sqcup X_k$ be a partitioned set of variables such that the variables in each part $X_i$ are noncommuting but for any $i\neq j$, the variables $x\in X_i$ commute with the variables $x'\in X_j$. Given as input a square matrix $T$ whose entries are linear forms over $\mathbb{Q}\langle{X}\rangle$, we consider the problem of checking if $T$ is invertible or not over the universal skew field of fractions of the partially commutative polynomial ring $\mathbb{Q}\langle{X}\rangle$ [Klep-Vinnikov-Volcic (2020)]. In this paper, we design a deterministic polynomial-time algorithm for this problem for constant $k$. The special case $k=1$ is the noncommutative Edmonds' problem (NSINGULAR), which has a deterministic polynomial-time algorithm by recent results [Garg-Gurvits-Oliveira-Wigderson (2016), Ivanyos-Qiao-Subrahmanyam (2018), Hamada-Hirai (2021)]. En route, we obtain the first deterministic polynomial-time algorithm for the equivalence testing problem of $k$-tape \emph{weighted} automata (for constant $k$), resolving a long-standing open problem [Harju and Karhum\"{a}ki (1991), Worrell (2013)]. Algebraically, the equivalence problem reduces to testing whether a partially commutative rational series over the partitioned set $X$ is zero or not [Worrell (2013)]. Decidability of this problem was established by Harju and Karhum\"{a}ki (1991). Prior to this work, a \emph{randomized} polynomial-time algorithm for this problem was given by Worrell (2013) and, subsequently, a deterministic quasipolynomial-time algorithm was also developed [Arvind et al. (2021)].

We consider the maximization of a submodular objective function $f:2^U\to\mathbb{R}_{\geq 0}$, where the objective $f$ is not accessed through an exact value oracle but is instead subject to noisy queries. We introduce a versatile adaptive sampling procedure that determines whether the marginal gain of the function $f$ is approximately above or below an input threshold with high probability in as few noisy samples as possible. Using the sampling procedure as a subroutine, we propose sample-efficient algorithms for monotone submodular maximization with cardinality and matroid constraints, as well as unconstrained non-monotone submodular maximization. The proposed algorithms achieve approximation guarantees arbitrarily close to those of the standard value oracle setting. We further provide an experimental evaluation on real instances of submodular maximization and demonstrate the sample efficiency of our proposed algorithm relative to alternative approaches.
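
A minimal sketch of the kind of adaptive threshold test described above might look as follows; the Hoeffding-style confidence radius, the boundedness assumption on the noisy values, and all parameter names are illustrative assumptions rather than the paper's procedure.

```python
import math

def above_threshold(noisy_gain, threshold, eps, delta, value_range=1.0, max_samples=100_000):
    """Decide, with probability at least 1 - delta, whether the mean of a noisy
    marginal-gain oracle is above or below `threshold`, up to additive slack eps.

    noisy_gain : callable returning an unbiased estimate of the marginal gain,
                 assumed to lie in an interval of length `value_range`.
    Returns True for "approximately above" and False for "approximately below".
    """
    total = 0.0
    for n in range(1, max_samples + 1):
        total += noisy_gain()
        mean = total / n
        # Hoeffding confidence radius, with a union bound over all rounds.
        radius = value_range * math.sqrt(math.log(2.0 * max_samples / delta) / (2.0 * n))
        if mean - radius >= threshold:
            return True   # confidently above the threshold
        if mean + radius <= threshold:
            return False  # confidently below the threshold
        if radius <= eps / 2.0:
            return mean >= threshold  # close call: answer within the eps slack
    return total / max_samples >= threshold
```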

Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$, with error measured in the $L_p(\Omega)$ norm. This problem is important when studying the application of neural networks in a variety of fields, including scientific computing and signal processing, and has previously been solved only when $p=q=\infty$. Our contribution is to provide a complete solution for all $1\leq p,q\leq \infty$ and $s > 0$ for which the corresponding Sobolev or Besov space compactly embeds into $L_p$. The key technical tool is a novel bit-extraction technique which gives an optimal encoding of sparse vectors. This enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based upon VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters which are not encodable.

We investigate the rationality of Weil sums of binomials of the form $W^{K,s}_u=\sum_{x \in K} \psi(x^s - u x)$, where $K$ is a finite field whose canonical additive character is $\psi$, and where $u$ is an element of $K^{\times}$ and $s$ is a positive integer relatively prime to $|K^\times|$, so that $x \mapsto x^s$ is a permutation of $K$. The Weil spectrum for $K$ and $s$, which is the family of values $W^{K,s}_u$ as $u$ runs through $K^\times$, is of interest in arithmetic geometry and in several information-theoretic applications. The Weil spectrum always contains at least three distinct values if $s$ is nondegenerate (i.e., if $s$ is not a power of $p$ modulo $|K^\times|$, where $p$ is the characteristic of $K$). It is already known that if the Weil spectrum contains precisely three distinct values, then they must all be rational integers. We show that if the Weil spectrum contains precisely four distinct values, then they must all be rational integers, with the sole exception of the case where $|K|=5$ and $s \equiv 3 \pmod{4}$.
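
For concreteness, the Weil spectrum can be computed directly from the definition over a small prime field, where the canonical additive character is $\psi(x)=e^{2\pi i x/p}$; the following sketch handles only prime fields $K=\mathbb{F}_p$ (not general finite fields) and is meant purely as an illustration.

```python
import cmath
from math import gcd

def weil_spectrum(p, s):
    """Values W^{K,s}_u = sum_{x in F_p} psi(x^s - u*x) for u in F_p^*,
    where K = F_p is a prime field and psi(a) = exp(2*pi*i*a/p).
    Requires gcd(s, p - 1) = 1, so that x -> x^s permutes F_p."""
    assert gcd(s, p - 1) == 1, "s must be coprime to |K^*| = p - 1"
    psi = lambda a: cmath.exp(2j * cmath.pi * (a % p) / p)
    spectrum = []
    for u in range(1, p):
        w = sum(psi(pow(x, s, p) - u * x) for x in range(p))
        spectrum.append(round(w.real, 6) + 1j * round(w.imag, 6))
    return spectrum

# The exceptional case mentioned above: |K| = 5 and s = 3 (so s = 3 mod 4).
# The four values (5+sqrt(5))/2, sqrt(5), -sqrt(5), (5-sqrt(5))/2 are not rational integers.
print(weil_spectrum(5, 3))
```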

We study the algorithmic task of finding large independent sets in Erdos-Renyi $r$-uniform hypergraphs on $n$ vertices having average degree $d$. Krivelevich and Sudakov showed that the maximum independent set has density $\left(\frac{r\log d}{(r-1)d}\right)^{1/(r-1)}$. We show that the class of low-degree polynomial algorithms can find independent sets of density $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$ but no larger. This extends and generalizes earlier results of Gamarnik and Sudan, Rahman and Virag, and Wein on graphs, and answers a question of Bal and Bennett. We conjecture that this statistical-computational gap holds for this problem. Additionally, we explore the universality of this gap by examining $r$-partite hypergraphs. A hypergraph $H=(V,E)$ is $r$-partite if there is a partition $V=V_1\cup\cdots\cup V_r$ such that each edge contains exactly one vertex from each set $V_i$. We consider the problem of finding large balanced independent sets (independent sets containing the same number of vertices in each partition) in random $r$-partite hypergraphs with $n$ vertices in each partition and average degree $d$. We prove that the maximum balanced independent set has density $\left(\frac{r\log d}{(r-1)d}\right)^{1/(r-1)}$ asymptotically. Furthermore, we prove an analogous low-degree computational threshold of $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$. Our results recover and generalize recent work of Perkins and the second author on bipartite graphs. While the graph case has been extensively studied, this work is the first to consider statistical-computational gaps of optimization problems on random hypergraphs. Our results suggest that these gaps persist for larger uniformities as well as across many models. A somewhat surprising aspect of the gap for balanced independent sets is that the algorithm achieving the lower bound is a simple degree-1 polynomial.

For a $P$-indexed persistence module ${\sf M}$, the (generalized) rank of ${\sf M}$ is defined as the rank of the limit-to-colimit map for the diagram of vector spaces of ${\sf M}$ over the poset $P$. For $2$-parameter persistence modules, a zigzag-persistence-based algorithm has recently been proposed that takes advantage of the fact that the generalized rank for $2$-parameter modules is equal to the number of full intervals in a zigzag module defined on the boundary of the poset. An analogous definition of boundary for $d$-parameter persistence modules or general $P$-indexed persistence modules does not seem plausible. To overcome this difficulty, we first unfold a given $P$-indexed module ${\sf M}$ into a zigzag module ${\sf M}_{ZZ}$ and then check how many full interval modules in a decomposition of ${\sf M}_{ZZ}$ can be folded back to remain full in a decomposition of ${\sf M}$. This number determines the generalized rank of ${\sf M}$. For the special case of degree-$d$ homology of $d$-complexes, we obtain a more efficient algorithm, including a linear-time algorithm for degree-$1$ homology in graphs.

Given a matrix $M\in \mathbb{R}^{m\times n}$, the low-rank matrix completion problem asks us to find a rank-$k$ approximation of $M$ as $UV^\top$, with $U\in \mathbb{R}^{m\times k}$ and $V\in \mathbb{R}^{n\times k}$, by observing only the entries indexed by a set $\Omega\subseteq [m]\times [n]$. In particular, we examine an approach that is widely used in practice: the alternating minimization framework. Jain, Netrapalli, and Sanghavi [JNS13] showed that if $M$ has incoherent rows and columns, then alternating minimization provably recovers the matrix $M$ by observing a number of entries that is nearly linear in $n$. While the sample complexity has subsequently been improved [GLZ17], these analyses require the alternating minimization steps to be computed exactly. This hinders the development of more efficient algorithms and fails to reflect the practical implementation of alternating minimization, where the updates are usually performed approximately in favor of efficiency. In this paper, we take a major step towards a more efficient and error-robust alternating minimization framework. To this end, we develop an analytical framework for alternating minimization that can tolerate a moderate amount of errors caused by approximate updates. Moreover, our algorithm runs in time $\widetilde O(|\Omega| k)$, which is nearly linear in the time to verify the solution, while preserving the sample complexity. This improves upon all prior known alternating minimization approaches, which require $\widetilde O(|\Omega| k^2)$ time.
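
For reference, the basic (exact-update) alternating minimization loop discussed above can be sketched as follows; this is only meant to fix notation and is not the error-robust, nearly-linear-time variant developed in the paper.

```python
import numpy as np

def alternating_minimization(M, mask, k, n_iters=50, rng=None):
    """Rank-k completion of M from the observed entries marked by `mask`.

    M    : (m, n) array whose entries are trusted only where mask is True.
    mask : boolean (m, n) array encoding the observed set Omega.
    Returns factors U (m, k) and V (n, k) with M ~ U @ V.T on Omega.
    """
    rng = rng or np.random.default_rng(0)
    m, n = M.shape
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    for _ in range(n_iters):
        # Fix V and solve an exact least-squares problem for each row of U.
        for i in range(m):
            obs = mask[i]
            if obs.any():
                U[i] = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)[0]
        # Fix U and solve an exact least-squares problem for each row of V.
        for j in range(n):
            obs = mask[:, j]
            if obs.any():
                V[j] = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)[0]
    return U, V
```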

For a field $\mathbb{F}$ and integers $d$ and $k$, a set ${\cal A} \subseteq \mathbb{F}^d$ is called $k$-nearly orthogonal if its members are non-self-orthogonal and every $k+1$ vectors of ${\cal A}$ include an orthogonal pair. We prove that for every prime $p$ there exists some $\delta = \delta(p)>0$, such that for every field $\mathbb{F}$ of characteristic $p$ and for all integers $k \geq 2$ and $d \geq k$, there exists a $k$-nearly orthogonal set of at least $d^{\delta \cdot k/\log k}$ vectors of $\mathbb{F}^d$. The size of the set is optimal up to the $\log k$ term in the exponent. We further prove two extensions of this result. In the first, we provide a large set ${\cal A}$ of non-self-orthogonal vectors of $\mathbb{F}^d$ such that for every two subsets of ${\cal A}$ of size $k+1$ each, some vector of one of the subsets is orthogonal to some vector of the other. In the second extension, every $k+1$ vectors of the produced set ${\cal A}$ include $\ell+1$ pairwise orthogonal vectors for an arbitrary fixed integer $1 \leq \ell \leq k$. The proofs involve probabilistic and spectral arguments and the hypergraph container method.
