
An \emph{eight-partition} of a finite set of points (respectively, of a continuous mass distribution) in $\mathbb{R}^3$ consists of three planes that divide the space into $8$ octants, such that each open octant contains at most $1/8$ of the points (respectively, of the mass). In 1966, Hadwiger showed that any mass distribution in $\mathbb{R}^3$ admits an eight-partition; moreover, one can prescribe the normal direction of one of the three planes. The analogous result for finite point sets follows by a standard limit argument. We prove the following variant of this result: Any mass distribution (or point set) in $\mathbb{R}^3$ admits an eight-partition for which the intersection of two of the planes is a line with a prescribed direction. Moreover, we present an efficient algorithm for calculating an eight-partition of a set of $n$ points in~$\mathbb{R}^3$ (with prescribed normal direction of one of the planes) in time $O^{*}(n^{5/2})$.
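
As a concrete reading of the definition, the following minimal Python sketch (our own; it assumes each plane is given as a pair of a normal vector and an offset, and the name `is_eight_partition` is ours) checks whether three given planes form an eight-partition of a finite point set:

```python
import numpy as np

def is_eight_partition(points, planes):
    """Check the defining property: the three planes {x : <n, x> = c} cut
    R^3 into octants, and each *open* octant may contain at most 1/8 of
    the points. `planes` is a list of three (normal, offset) pairs."""
    points = np.asarray(points, dtype=float)
    signs = np.stack([points @ np.asarray(n) - c for n, c in planes], axis=1)
    interior = np.all(signs != 0, axis=1)  # points on a plane lie in no open octant
    labels = (signs[interior] > 0).astype(int) @ np.array([1, 2, 4])
    counts = np.bincount(labels, minlength=8)
    return counts.max() <= len(points) / 8
```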

Related Content


A subset $S$ of the Boolean hypercube $\mathbb{F}_2^n$ is a sumset if $S = A+A = \{a + b \ | \ a, b\in A\}$ for some $A \subseteq \mathbb{F}_2^n$. We prove that the number of sumsets in $\mathbb{F}_2^n$ is asymptotically $(2^n-1)2^{2^{n-1}}$. Furthermore, we show that the family of sumsets in $\mathbb{F}_2^n$ is almost identical to the family of all subsets of $\mathbb{F}_2^n$ that contain a complete linear subspace of co-dimension $1$.
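
For very small $n$ the count of sumsets can be checked by brute force. The sketch below (our own; vectors of $\mathbb{F}_2^n$ are encoded as bitmasks, so addition is XOR) enumerates every $A \subseteq \mathbb{F}_2^n$ and collects the distinct sets $A+A$; the formula above is asymptotic, so exact agreement at tiny $n$ is not expected:

```python
from itertools import chain, combinations

n = 3                        # dimension of F_2^n; brute force only works for tiny n
universe = range(2 ** n)     # vectors of F_2^n encoded as bitmasks

def sumset(A):
    return frozenset(a ^ b for a in A for b in A)   # addition in F_2^n is XOR

all_subsets = chain.from_iterable(
    combinations(universe, r) for r in range(2 ** n + 1))
print(len({sumset(A) for A in all_subsets}))  # compare with (2^n - 1) * 2^(2^(n-1))
```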

We bound the smoothed running time of the FLIP algorithm for local Max-Cut as a function of $\alpha$, the arboricity of the input graph. We show that, with high probability, the following holds (where $n$ is the number of nodes and $\phi$ is the smoothing parameter): 1) When $\alpha = O(\log^{1-\epsilon} n)$, FLIP terminates in $\phi \cdot \mathrm{poly}(n)$ iterations, where $\epsilon \in (0,1]$ is an arbitrarily small constant. Prior to our results, the only graph families for which FLIP was known to achieve a smoothed polynomial running time were complete graphs and graphs with logarithmic maximum degree. 2) For arbitrary values of $\alpha$ we get a running time of $\phi n^{O(\frac{\alpha}{\log n} + \log \alpha)}$. This improves over the best known running time for general graphs, $\phi n^{O(\sqrt{\log n})}$, whenever $\alpha = o(\log^{1.5} n)$. Specifically, when $\alpha = O(\log n)$ we get a significantly faster running time of $\phi n^{O(\log \log n)}$.
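
For reference, here is a minimal sketch of the FLIP dynamics being analyzed (our own unoptimized Python; in the smoothed model the edge weights are independently perturbed at random, which is what the iteration bounds above quantify):

```python
import random
from collections import defaultdict

def flip_local_max_cut(n, edges):
    """FLIP: start from an arbitrary cut and, while possible, flip any single
    node whose move strictly increases the cut weight; stop at a local optimum.
    `edges` is a list of (u, v, w) triples on nodes 0..n-1."""
    inc = defaultdict(list)
    for u, v, w in edges:
        inc[u].append((v, w))
        inc[v].append((u, w))
    side = [random.random() < 0.5 for _ in range(n)]
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain of flipping v = (same-side incident weight) - (cross weight)
            gain = sum(w if side[u] == side[v] else -w for u, w in inc[v])
            if gain > 0:
                side[v] = not side[v]
                improved = True
    return side
```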

Let $\mathbb{K}$ be any field, let $X\subset \mathbb{P}^{k-1}$ be a set of $n$ distinct $\mathbb{K}$-rational points, and let $a\geq 1$ be an integer. In this paper we find lower bounds for the minimum distance $d(X)_a$ of the evaluation code of order $a$ associated to $X$. The first results use $\alpha(X)$, the initial degree of the defining ideal of $X$, and the resulting bounds hold for any set $X$. In another result we use $s(X)$, the minimum socle degree, to obtain a lower bound in the case when $X$ is in general linear position. In both situations we improve and generalize known results.
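
To make the object concrete, the following brute-force sketch (our own; for simplicity it uses affine points on a line over $\mathbb{F}_5$ rather than projective points, which yields a Reed-Solomon code) evaluates all polynomials of degree at most $a$ on $X$ and computes the minimum distance directly:

```python
from itertools import product

q, a = 5, 2                  # prime field F_5, order of the evaluation code
X = [0, 1, 2, 3, 4]          # n = 5 distinct evaluation points
n = len(X)

def min_distance():
    """Minimum weight of a nonzero codeword (f(x) for x in X), where f
    ranges over polynomials of degree <= a with coefficients in F_q."""
    d = n
    for coeffs in product(range(q), repeat=a + 1):
        if any(coeffs):
            word = [sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
                    for x in X]
            d = min(d, sum(v != 0 for v in word))
    return d

print(min_distance())        # 3 = n - a, the Reed-Solomon minimum distance
```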

We consider the constrained sampling problem where the goal is to sample from a target distribution $\pi(x)\propto e^{-f(x)}$ when $x$ is constrained to lie in a convex body $\mathcal{C}$. Motivated by penalty methods from continuous optimization, we propose penalized Langevin Dynamics (PLD) and penalized underdamped Langevin Monte Carlo (PULMC) methods that convert the constrained sampling problem into an unconstrained one by introducing a penalty function for constraint violations. When $f$ is smooth and gradients are available, we get $\tilde{\mathcal{O}}(d/\varepsilon^{10})$ iteration complexity for PLD to sample the target up to an $\varepsilon$-error, where the error is measured in the TV distance and $\tilde{\mathcal{O}}(\cdot)$ hides logarithmic factors. For PULMC, we improve the result to $\tilde{\mathcal{O}}(\sqrt{d}/\varepsilon^{7})$ when the Hessian of $f$ is Lipschitz and the boundary of $\mathcal{C}$ is sufficiently smooth. To our knowledge, these are the first convergence results for underdamped Langevin Monte Carlo methods in the constrained sampling setting that handle non-convex $f$, and they provide guarantees with the best dimension dependence among existing methods with deterministic gradients. If unbiased stochastic estimates of the gradient of $f$ are available, we propose PSGLD and PSGULMC methods that can handle stochastic gradients and are scalable to large datasets without requiring Metropolis-Hastings correction steps. For PSGLD and PSGULMC, when $f$ is strongly convex and smooth, we obtain $\tilde{\mathcal{O}}(d/\varepsilon^{18})$ and $\tilde{\mathcal{O}}(d\sqrt{d}/\varepsilon^{39})$ iteration complexity in the $W_2$ (Wasserstein) distance. When $f$ is smooth and can be non-convex, we provide finite-time performance bounds and iteration complexity results. Finally, we illustrate the performance on Bayesian LASSO regression and Bayesian constrained deep learning problems.
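
A minimal sketch of the PLD update as described (our own simplification: the penalty is taken to be $\frac{\lambda}{2}\,\mathrm{dist}(x,\mathcal{C})^2$, whose gradient for convex $\mathcal{C}$ is $\lambda(x - \Pi_{\mathcal{C}}(x))$; the step size and penalty weight are illustrative, not the tuned values from the analysis):

```python
import numpy as np

def penalized_langevin(grad_f, project, x0, eta=1e-3, lam=10.0, n_iter=10_000):
    """Penalized Langevin Dynamics (sketch): run unconstrained Langevin on
    f(x) + (lam/2) * dist(x, C)^2, where `project` is the Euclidean
    projection onto the convex body C."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        grad = grad_f(x) + lam * (x - project(x))   # penalty gradient
        x = x - eta * grad + np.sqrt(2 * eta) * rng.standard_normal(x.shape)
    return x

# Example: sample exp(-||x||^2 / 2) restricted to the unit ball in R^2.
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
print(penalized_langevin(lambda x: x, proj_ball, np.zeros(2)))
```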

We consider the maximization of a submodular objective function $f:2^U\to\mathbb{R}_{\geq 0}$, where the objective $f$ is not accessed through a value oracle but instead only through noisy queries. We introduce a versatile adaptive sampling procedure which determines whether the marginal gain of the function $f$ is approximately above or below an input threshold with high probability, using as few noisy samples as possible. Using this sampling procedure as a subroutine, we propose sample-efficient algorithms for monotone submodular maximization with cardinality and matroid constraints, as well as for unconstrained non-monotone submodular maximization. The proposed algorithms achieve approximation guarantees arbitrarily close to those of the standard value oracle setting. We further provide an experimental evaluation on real instances of submodular maximization and demonstrate the sample efficiency of our proposed algorithms relative to alternative approaches.
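
One standard way to realize such a threshold test is sketched below, under our own simplifying assumptions (noisy values bounded in $[0,1]$, a doubling sample schedule, and Hoeffding confidence radii; the paper's actual procedure and constants may differ):

```python
import math

def above_threshold(sample, tau, delta, max_rounds=20):
    """Decide w.h.p. whether the mean of `sample()` is above or below tau,
    assuming samples lie in [0, 1]. Doubles the sample count each round and
    stops once the Hoeffding confidence interval clears the threshold."""
    values = []
    for r in range(1, max_rounds + 1):
        values += [sample() for _ in range(2 ** r)]
        k = len(values)
        mean = sum(values) / k
        # Hoeffding radius with a failure budget of delta / max_rounds per round
        radius = math.sqrt(math.log(2 * max_rounds / delta) / (2 * k))
        if mean - radius > tau:
            return True
        if mean + radius < tau:
            return False
    return mean > tau   # undecided within the budget: fall back to the mean
```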

We give improved lower bounds for binary $3$-query locally correctable codes (3-LCCs) $C \colon \{0,1\}^k \rightarrow \{0,1\}^n$. Specifically, we prove: (1) If $C$ is a linear design 3-LCC, then $n \geq 2^{(1 - o(1))\sqrt{k}}$. A design 3-LCC has the additional property that the correcting sets for every codeword bit form a perfect matching and every pair of codeword bits is queried an equal number of times across all matchings. Our bound is tight up to a factor $\sqrt{8}$ in the exponent of $2$, as the best construction of binary $3$-LCCs (obtained by taking Reed-Muller codes over $\mathbb{F}_4$ and applying a natural projection map) is a design $3$-LCC with $n \leq 2^{\sqrt{8 k}}$. Up to a $\sqrt{8}$ factor, this resolves the Hamada conjecture on the maximum $\mathbb{F}_2$-codimension of a $4$-design. (2) If $C$ is a smooth, non-linear $3$-LCC with near-perfect completeness, then $n \geq k^{\Omega(\log k)}$. (3) If $C$ is a smooth, non-linear $3$-LCC with completeness $1 - \varepsilon$, then $n \geq \tilde{\Omega}(k^{\frac{1}{2\varepsilon}})$. In particular, when $\varepsilon$ is a small constant, this implies a lower bound for general non-linear LCCs that beats the prior best $n \geq \tilde{\Omega}(k^3)$ lower bound of [AGKM23] by a polynomial factor. Our design LCC lower bound is obtained via a fine-grained analysis of the Kikuchi matrix method applied to a variant of the matrix used in [KM23]. Our lower bounds for non-linear codes are obtained via a new reduction, built from scratch, from non-linear $3$-LCCs to a system of "chain polynomial equations": polynomial equations with a structure similar to the long chain derivations that arise in the lower bounds for linear $3$-LCCs [KM23].

We present a pseudopolynomial-time algorithm for the Knapsack problem that has running time $\widetilde{O}(n + t\sqrt{p_{\max}})$, where $n$ is the number of items, $t$ is the knapsack capacity, and $p_{\max}$ is the maximum item profit. This improves over the $\widetilde{O}(n + t \, p_{\max})$-time algorithm based on the convolution and prediction technique by Bateni et al.~(STOC 2018). Moreover, we give some evidence, based on a strengthening of the Min-Plus Convolution Hypothesis, that our running time might be optimal. Our algorithm uses two new technical tools, which might be of independent interest. First, we generalize the $\widetilde{O}(n^{1.5})$-time algorithm for bounded monotone min-plus convolution by Chi et al.~(STOC 2022) to the \emph{rectangular} case where the range of entries can be different from the sequence length. Second, we give a reduction from general knapsack instances to \emph{balanced} instances, where all items have nearly the same profit-to-weight ratio, up to a constant factor. Using these techniques, we can also obtain algorithms that run in time $\widetilde{O}(n + OPT\sqrt{w_{\max}})$, $\widetilde{O}(n + (nw_{\max}p_{\max})^{1/3}t^{2/3})$, and $\widetilde{O}(n + (nw_{\max}p_{\max})^{1/3} OPT^{2/3})$, where $OPT$ is the optimal total profit and $w_{\max}$ is the maximum item weight.
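
For context, this is the classical $O(nt)$ pseudopolynomial dynamic program that such running times are measured against (a textbook baseline, not the paper's algorithm):

```python
def knapsack_dp(items, t):
    """Classic O(n*t) DP over capacities. `items` is a list of
    (weight, profit) pairs and t is the knapsack capacity."""
    dp = [0] * (t + 1)                 # dp[c] = best profit within capacity c
    for w, p in items:
        for c in range(t, w - 1, -1):  # descending c: each item used at most once
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[t]

print(knapsack_dp([(3, 4), (2, 3), (4, 5)], 5))   # 7: take items (3,4) and (2,3)
```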

We present an algebraic algorithm that computes the composition of two power series in $\tilde{O}(n)$ time. The previous best algorithms are the $O(n^{1+o(1)})$-time algorithm by Kedlaya and Umans (FOCS 2008) and an $O(n^{1.43})$-time algebraic algorithm by Neiger, Salvy, Schost and Villard (JACM 2023). Our algorithm builds upon the recent Graeffe iteration approach to manipulating rational power series introduced by Bostan and Mori (SOSA 2021).
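
To illustrate the Graeffe-style manipulation of rational power series that the algorithm builds on, here is a short sketch of the Bostan-Mori coefficient-extraction scheme (our own implementation with naive polynomial multiplication over $\mathbb{Q}$; the composition algorithm itself is substantially more involved):

```python
from fractions import Fraction

def polymul(a, b):
    res = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def rational_coeff(P, Q, N):
    """[x^N] P(x)/Q(x) (with Q(0) != 0): multiply numerator and denominator
    by Q(-x) so that Q(x)Q(-x) = U(x^2) is even, keep the matching-parity
    part of the numerator, and halve N -- the Graeffe / Bostan-Mori step."""
    P = [Fraction(c) for c in P]
    Q = [Fraction(c) for c in Q]
    while N > 0:
        Qneg = [-c if i % 2 else c for i, c in enumerate(Q)]   # Q(-x)
        P = polymul(P, Qneg)[N % 2::2]   # even or odd part of P(x)Q(-x)
        Q = polymul(Q, Qneg)[::2]        # U(x) with U(x^2) = Q(x)Q(-x)
        N //= 2
        if not P:
            return Fraction(0)
    return P[0] / Q[0]

# Fibonacci: F(x) = x / (1 - x - x^2), so [x^10] gives F_10 = 55.
print(rational_coeff([0, 1], [1, -1, -1], 10))
```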

Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$, with error measured in the $L_p(\Omega)$ norm. This problem is important when studying the application of neural networks in a variety of fields, including scientific computing and signal processing, and has previously been solved only when $p=q=\infty$. Our contribution is to provide a complete solution for all $1\leq p,q\leq \infty$ and $s > 0$ for which the corresponding Sobolev or Besov space compactly embeds into $L_p$. The key technical tool is a novel bit-extraction technique which gives an optimal encoding of sparse vectors. This enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based upon VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters which are not encodable.
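
The depth-efficiency phenomenon behind such results is visible already in a classical toy example (Telgarsky's sawtooth, sketched below in our own NumPy code; this is not the paper's bit-extraction construction): composing $L$ two-piece ReLU 'hats' produces $2^L$ linear pieces from only $O(L)$ parameters, whereas a classical piecewise-linear approximant needs a parameter per piece.

```python
import numpy as np

def hat(x):
    # two-ReLU 'hat' 2*relu(x) - 4*relu(x - 1/2): maps [0, 1] onto [0, 1]
    return 2 * np.maximum(x, 0) - 4 * np.maximum(x - 0.5, 0)

def sawtooth(x, L):
    # composing L hats yields 2^L linear pieces from O(L) parameters
    for _ in range(L):
        x = hat(x)
    return x

xs = np.linspace(0.0, 1.0, 17)
print(sawtooth(xs, 3))   # oscillates 2^(L-1) = 4 full times across [0, 1]
```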

Given a matrix $M\in \mathbb{R}^{m\times n}$, the low rank matrix completion problem asks us to find a rank-$k$ approximation of $M$ as $UV^\top$, with $U\in \mathbb{R}^{m\times k}$ and $V\in \mathbb{R}^{n\times k}$, by observing only the entries indexed by a set $\Omega\subseteq [m]\times [n]$. We examine an approach that is widely used in practice: the alternating minimization framework. Jain, Netrapalli, and Sanghavi [JNS13] showed that if $M$ has incoherent rows and columns, then alternating minimization provably recovers $M$ from a number of observed entries that is nearly linear in $n$. While the sample complexity has subsequently been improved [GLZ17], the alternating minimization steps are required to be computed exactly. This hinders the development of more efficient algorithms and fails to reflect the practical implementation of alternating minimization, where the updates are usually performed approximately in favor of efficiency. In this paper, we take a major step towards a more efficient and error-robust alternating minimization framework. To this end, we develop an analytical framework for alternating minimization that can tolerate a moderate amount of error caused by approximate updates. Moreover, our algorithm runs in time $\widetilde O(|\Omega| k)$, which is nearly linear in the time needed to verify the solution, while preserving the sample complexity. This improves upon all previously known alternating minimization approaches, which require $\widetilde O(|\Omega| k^2)$ time.
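
A minimal sketch of the alternating minimization loop in question (our own; it performs exact per-row least-squares solves with a small ridge term for numerical stability, whereas the point of the paper is precisely that approximate inner solves suffice):

```python
import numpy as np

def als_completion(M, mask, k, n_iter=50, reg=1e-6):
    """Alternating minimization for matrix completion: fix V and solve a
    least-squares problem for each row of U over the observed entries of
    that row, then swap the roles of U and V. `mask` is a boolean m x n
    array marking the observed entries Omega."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U, V = rng.standard_normal((m, k)), rng.standard_normal((n, k))
    for _ in range(n_iter):
        for i in range(m):
            cols = np.flatnonzero(mask[i])
            A = V[cols]
            U[i] = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ M[i, cols])
        for j in range(n):
            rows = np.flatnonzero(mask[:, j])
            A = U[rows]
            V[j] = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ M[rows, j])
    return U, V
```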
