
A sequence $\pi_1,\pi_2,\dots$ of permutations is said to be "quasirandom" if the induced density of every permutation $\sigma$ in $\pi_n$ converges to $1/|\sigma|!$ as $n\to\infty$. We prove that $\pi_1,\pi_2,\dots$ is quasirandom if and only if the density of each permutation $\sigma$ in the set $$\{123,321,2143,3412,2413,3142\}$$ converges to $1/|\sigma|!$. Previously, the smallest cardinality of a set with this property, called a "quasirandom-forcing" set, was known to be between four and eight. In fact, we show that there is a single linear expression of the densities of the six permutations in this set which forces quasirandomness and show that this is best possible in the sense that there is no shorter linear expression of permutation densities with positive coefficients with this property. In the language of theoretical statistics, this expression provides a new nonparametric independence test for bivariate continuous distributions related to Spearman's $\rho$.
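
For concreteness, the pattern densities that the theorem forces can be computed by brute force. The sketch below (an illustration with hypothetical helper names, not from the paper) counts occurrences of a pattern $\sigma$ over all $|\sigma|$-element position subsets of $\pi$ and divides by $\binom{|\pi|}{|\sigma|}$.

```python
from itertools import combinations
from math import comb

def pattern_count(pi, sigma):
    """Count occurrences of the pattern sigma in the permutation pi
    (both in one-line notation): position subsets of pi whose entries
    are ordered like sigma."""
    k = len(sigma)
    target = tuple(sorted(range(k), key=lambda i: sigma[i]))  # relative order of sigma
    return sum(
        1
        for positions in combinations(range(len(pi)), k)
        if tuple(sorted(range(k), key=lambda i: pi[positions[i]])) == target
    )

def pattern_density(pi, sigma):
    """Occurrences of sigma in pi divided by C(|pi|, |sigma|)."""
    return pattern_count(pi, sigma) / comb(len(pi), len(sigma))

# Quasirandomness of pi_1, pi_2, ... is forced by the densities of these six
# patterns each converging to 1/|sigma|!.
pi = (3, 1, 4, 5, 2, 6)
for sigma in [(1, 2, 3), (3, 2, 1), (2, 1, 4, 3), (3, 4, 1, 2), (2, 4, 1, 3), (3, 1, 4, 2)]:
    print(sigma, pattern_density(pi, sigma))
```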

Related content

We present two new explicit constructions of Cayley high dimensional expanders (HDXs) over the abelian group $\mathbb{F}_2^n$. Our expansion proofs use only linear algebra and combinatorial arguments. The first construction gives local spectral HDXs of any constant dimension and subpolynomial degree $\exp(n^\epsilon)$ for every $\epsilon >0$, improving on a construction by Golowich [Gol23] which achieves $\epsilon =1/2$. [Gol23] derives these HDXs by sparsifying the complete Grassmann poset of subspaces. The novelty in our construction is the ability to sparsify any expanding Grassmannian poset, leading to iterated sparsification and much smaller degrees. The sparse Grassmannian (which is of independent interest in the theory of HDXs) serves as the generating set of the Cayley graph. Our second construction gives a 2-dimensional HDX of any polynomial degree $\exp(\epsilon n)$ for any constant $\epsilon > 0$, which is simultaneously a spectral expander and a coboundary expander. To the best of our knowledge, this is the first such non-trivial construction. We name it the Johnson complex, as it is derived from the classical Johnson scheme, whose vertices serve as the generating set of this Cayley graph. This construction may be viewed as a derandomization of the recent random geometric complexes of [LMSY23]. Establishing coboundary expansion through Gromov's "cone method" and the associated isoperimetric inequalities is the most intricate aspect of this construction. While these two constructions are quite different, we show that they both share a common structure, resembling the intersection patterns of vectors in the Hadamard code. We propose a general framework of such "Hadamard-like" constructions in the hope that it will yield new HDXs.

We characterize the power of constant-depth Boolean circuits in generating uniform symmetric distributions. Let $f\colon\{0,1\}^m\to\{0,1\}^n$ be a Boolean function where each output bit of $f$ depends only on $O(1)$ input bits. Assume the output distribution of $f$ on uniform input bits is close to a uniform distribution $D$ with a symmetric support. We show that $D$ is essentially one of the following six possibilities: (1) point distribution on $0^n$, (2) point distribution on $1^n$, (3) uniform over $\{0^n,1^n\}$, (4) uniform over strings with even Hamming weights, (5) uniform over strings with odd Hamming weights, and (6) uniform over all strings. This confirms a conjecture of Filmus, Leigh, Riazanov, and Sokolov (RANDOM 2023).
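
For intuition about case (4), a 2-local map can generate the uniform distribution over even-Hamming-weight strings exactly: with $y_i = x_i \oplus x_{(i+1)\bmod n}$, the map is linear with kernel $\{0^n,1^n\}$ and image the even-weight strings, so every even-weight string receives the same probability. A small sanity check (an illustration, not taken from the paper):

```python
from collections import Counter
from itertools import product

n = 4

def f(x):
    # Each output bit depends on only 2 input bits (constant locality):
    # y_i = x_i XOR x_{(i+1) mod n}.
    return tuple(x[i] ^ x[(i + 1) % n] for i in range(n))

counts = Counter(f(x) for x in product((0, 1), repeat=n))

# Support is exactly the even-Hamming-weight strings, each hit equally
# often (the kernel {0^n, 1^n} has size 2) -- case (4) of the theorem.
assert all(sum(y) % 2 == 0 for y in counts)
assert len(set(counts.values())) == 1
print(len(counts), "even-weight strings, each with multiplicity", next(iter(counts.values())))
```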

The singular value decomposition (SVD) factors a matrix into a product of three matrices: a matrix of left singular vectors, a diagonal matrix of non-negative singular values, and a matrix of right singular vectors. There are two main approaches to computing the SVD: the classical method and the randomized method. The classical approach yields accurate singular values. The randomized approach is mainly used for high-dimensional matrices and trades some approximation accuracy for not having to compute all singular values. In this paper, the SVD computation is formalized as an optimization problem solved by a gradient search algorithm. This yields a power method that computes either all singular values or the first few largest ones, together with their associated right singular vectors. In this iterative search, the accuracy of the singular values and of the associated vector matrix depends on user-chosen settings. Two applications of the SVD are principal component analysis and the autoencoder used in neural network models.
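
A minimal sketch of the power-method viewpoint, assuming NumPy and using a plain power iteration on $A^\top A$ (not necessarily the authors' exact gradient scheme); it returns the leading singular triple, and smaller singular values could be obtained afterwards by deflating $A$.

```python
import numpy as np

def top_singular_triple(A, num_iters=1000, tol=1e-12):
    """Power iteration on A^T A: returns (sigma_1, u_1, v_1).

    Accuracy is governed by the user-chosen iteration count and
    tolerance, as in the iterative search described in the abstract.
    """
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A.T @ (A @ v)              # one gradient/power step on A^T A
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            v = w
            break
        v = w
    sigma = np.linalg.norm(A @ v)       # leading singular value
    u = (A @ v) / sigma                 # associated left singular vector
    return sigma, u, v

A = np.random.default_rng(1).standard_normal((50, 20))
sigma, _, _ = top_singular_triple(A)
print(abs(sigma - np.linalg.svd(A, compute_uv=False)[0]))  # should be ~0
```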

In this work, we show that the class of multivariate degree-$d$ polynomials mapping $\{0,1\}^{n}$ to any Abelian group $G$ is locally correctable with $\widetilde{O}_{d}((\log n)^{d})$ queries for up to a fraction of errors approaching half the minimum distance of the underlying code. In particular, this result holds even for polynomials over the reals or the rationals, special cases that were previously not known. Further, we show that they are locally list correctable up to a fraction of errors approaching the minimum distance of the code. These results build on and extend the prior work of the authors [ABPSS24] (STOC 2024), who considered the case of linear polynomials and gave analogous results. Low-degree polynomials over the Boolean cube $\{0,1\}^{n}$ arise naturally in Boolean circuit complexity and learning theory, and our work furthers the study of their coding-theoretic properties. Extending the results of [ABPSS24] from linear to higher-degree polynomials involves several new challenges, and handling them gives us further insights into properties of low-degree polynomials over the Boolean cube. For local correction, we construct a set of points in the Boolean cube that lies between two exponentially close parallel hyperplanes and is moreover an interpolating set for degree-$d$ polynomials. To show that the class of degree-$d$ polynomials is list decodable up to the minimum distance, we stitch together results on anti-concentration of low-degree polynomials, the Sunflower lemma, and the Footprint bound for counting common zeroes of polynomials. Analyzing the local list corrector of [ABPSS24] for higher-degree polynomials involves understanding random restrictions of non-zero degree-$d$ polynomials on a Hamming slice. In particular, we show that a simple random restriction process for reducing the dimension of the Boolean cube is a suitably good sampler for Hamming slices.

A $(\beta,\delta,\Delta)$-padded decomposition of an edge-weighted graph $G = (V,E,w)$ is a stochastic decomposition into clusters of diameter at most $\Delta$ such that for every vertex $v\in V$, the probability that $\mathrm{ball}_G(v,\gamma\Delta)$ is entirely contained in the cluster containing $v$ is at least $e^{-\beta\gamma}$ for every $\gamma \in [0,\delta]$. Padded decompositions have been studied for decades and have found numerous applications, including metric embedding, multicommodity flow-cut gap, multicut, and zero extension problems, to name a few. In these applications, the parameter $\beta$, called the padding parameter, is the most important one, since it determines either the distortion or the approximation ratio. For general graphs with $n$ vertices, $\beta = \Theta(\log n)$. Klein, Plotkin, and Rao showed that $K_r$-minor-free graphs have padding parameter $\beta = O(r^3)$, which is a significant improvement over general graphs when $r$ is a constant. A long-standing conjecture is to construct a padded decomposition for $K_r$-minor-free graphs with padding parameter $\beta = O(\log r)$. Despite decades of research, the best-known result is $\beta = O(r)$, even for graphs with treewidth at most $r$. In this work, we make significant progress toward the aforementioned conjecture by showing that graphs with treewidth $\mathrm{tw}$ admit a padded decomposition with padding parameter $O(\log \mathrm{tw})$, which is tight. As corollaries, we obtain an exponential improvement in the dependency on treewidth in a host of algorithmic applications: an $O(\sqrt{\log n \cdot \log(\mathrm{tw})})$ flow-cut gap, a max flow-min multicut ratio of $O(\log(\mathrm{tw}))$, an $O(\log(\mathrm{tw}))$ approximation for the 0-extension problem, an $\ell^{O(\log n)}_\infty$ embedding with distortion $O(\log \mathrm{tw})$, and an $O(\log \mathrm{tw})$ bound on the integrality gap for the uniform sparsest cut.
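
As a baseline for comparison, the classical random decomposition of Calinescu, Karloff, and Rabani (a single random radius plus a random ordering of centers) already gives clusters of diameter at most $\Delta$ with padding parameter $O(\log n)$ on general graphs. The sketch below implements that baseline on a precomputed shortest-path matrix; it is not the treewidth-based construction of this paper.

```python
import random

def ckr_decomposition(dist, Delta, seed=None):
    """Baseline CKR-style random decomposition (padding O(log n)).

    dist: n x n shortest-path distance matrix of the edge-weighted graph.
    Returns cluster[u] = the center whose cluster contains u.  Each
    cluster sits inside a ball of radius <= Delta/2 around its center,
    so its (weak) diameter is at most Delta.
    """
    rng = random.Random(seed)
    n = len(dist)
    r = rng.uniform(Delta / 4, Delta / 2)   # one shared random radius
    order = list(range(n))
    rng.shuffle(order)                      # random ordering of centers
    cluster = [None] * n
    for u in range(n):
        for v in order:                     # first center within distance r claims u
            if dist[v][u] <= r:
                cluster[u] = v
                break
    return cluster

# Toy example: a 6-vertex unit-weight path, so dist[i][j] = |i - j|.
dist = [[abs(i - j) for j in range(6)] for i in range(6)]
print(ckr_decomposition(dist, Delta=3, seed=0))
```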

The Kaczmarz method is a way to iteratively solve a linear system of equations $Ax = b$. One interprets the solution $x$ as the point where hyperplanes intersect and then iteratively projects an approximate solution onto these hyperplanes to get better and better approximations. We note a somewhat related idea: one could take two random hyperplanes and project one into the orthogonal complement of the other. This leads to a sequence of linear systems $A^{(k)} x = b^{(k)}$ that is fast to compute and preserves the original solution, and whose small singular values grow like $\sigma_{\ell}(A^{(k)}) \sim \exp(k/n^2) \cdot \sigma_{\ell}(A)$.
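
A hedged NumPy sketch of both ingredients: the standard Kaczmarz projection sweep, and the row-on-row projection that replaces one hyperplane by its projection into the orthogonal complement of another while leaving the solution set of $Ax = b$ unchanged.

```python
import numpy as np

def kaczmarz(A, b, num_sweeps=200):
    """Classical Kaczmarz iteration: repeatedly project the iterate
    onto the hyperplane {x : a_i^T x = b_i}."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(num_sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

def project_row(A, b, i, j):
    """Replace row i by its projection into the orthogonal complement
    of row j (and update b accordingly).  The solution of Ax = b is
    unchanged, since the new equation is a combination of old ones."""
    A, b = A.copy(), b.copy()
    c = (A[i] @ A[j]) / (A[j] @ A[j])
    A[i] -= c * A[j]
    b[i] -= c * b[j]
    return A, b

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))
x_true = rng.standard_normal(40)
b = A @ x_true

A2, b2 = project_row(A, b, i=0, j=1)            # one random-pair projection step
print(np.linalg.norm(A2 @ x_true - b2))         # ~0: solution preserved
print(np.linalg.norm(kaczmarz(A, b) - x_true))  # decreases with more sweeps
```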

Given a database of bit strings $A_1,\ldots,A_m\in \{0,1\}^n$, a fundamental data structure task is to estimate the distances between a given query $B\in \{0,1\}^n$ and all the strings in the database. In addition, one might further want to ensure the integrity of the database by releasing these distance statistics in a secure manner. In this work, we propose differentially private (DP) data structures for this type of task, with a focus on Hamming and edit distance. On top of the strong privacy guarantees, our data structures are also time- and space-efficient. In particular, our data structure is $\epsilon$-DP against any sequence of queries of arbitrary length, and for any query $B$ such that the maximum distance to any string in the database is at most $k$, we output $m$ distance estimates. Moreover:
- For Hamming distance, our data structure answers any query in $\widetilde O(mk+n)$ time and each estimate deviates from the true distance by at most $\widetilde O(k/e^{\epsilon/\log k})$;
- For edit distance, our data structure answers any query in $\widetilde O(mk^2+n)$ time and each estimate deviates from the true distance by at most $\widetilde O(k/e^{\epsilon/(\log k \log n)})$.
For moderate $k$, both data structures support sublinear query operations. We obtain these results via a novel adaptation of the randomized response technique as a bit flipping procedure, applied to the sketched strings.
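
The final ingredient, in isolation: randomized response flips each (sketched) bit with probability $1/(1+e^{\epsilon})$, which is $\epsilon$-DP per bit, and the Hamming distance between two released strings can be debiased in closed form. The sketch below is a simplified standalone illustration, not the paper's data structure, and it omits the sketching step.

```python
import math
import random

def randomized_response(bits, epsilon, rng=random):
    """Flip each bit independently with probability 1/(1 + e^epsilon);
    releasing the result is epsilon-DP with respect to any single bit."""
    p = 1.0 / (1.0 + math.exp(epsilon))
    return [b ^ (rng.random() < p) for b in bits]

def debiased_hamming(noisy_a, noisy_b, epsilon):
    """Unbiased estimate of the Hamming distance between the originals,
    assuming both strings went through randomized_response(., epsilon)."""
    n = len(noisy_a)
    p = 1.0 / (1.0 + math.exp(epsilon))
    q = 1.0 - 2.0 * p * (1.0 - p)   # Pr[noisy bits agree | originals agree]
    observed = sum(x != y for x, y in zip(noisy_a, noisy_b))
    # E[observed] = (n - d)(1 - q) + d*q  =>  solve for d.
    return (observed - n * (1.0 - q)) / (2.0 * q - 1.0)

a = [random.randint(0, 1) for _ in range(2000)]
b = [x ^ (random.random() < 0.025) for x in a]           # true distance ~50
na, nb = randomized_response(a, 2.0), randomized_response(b, 2.0)
print(debiased_hamming(na, nb, 2.0))                     # noisy estimate near 50
```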

Consider the following NP problem DOUBLE CLIQUE (abbr.: CLIQ$_{2}$): given a natural number $k>2$ and a pair of disjoint subgraphs of a fixed graph $G$, decide whether each subgraph in question contains a $k$-clique. I prove that CLIQ$_{2}$ cannot be solved in polynomial time by a deterministic TM, which implies $\mathbf{P}\neq \mathbf{NP}$. This proof upgrades the well-known proof of polynomial unsolvability of the partial result with respect to the analogous monotone problem CLIQUE (abbr.: CLIQ), as well as my previous presentation that used an appropriate 3-valued semantics. Note that problem CLIQ$_{2}$ is not monotone and appears more complex than merely iterating CLIQ, since the required subgraphs are mutually dependent.

Accurate approximation of a real-valued function depends on two aspects of the available data: the density of inputs within the domain of interest and the variation of the outputs over that domain. There are few methods for assessing whether the density of inputs is \textit{sufficient} to identify the relevant variations in outputs -- i.e., the ``geometric scale'' of the function -- despite the fact that sampling density is closely tied to the success or failure of an approximation method. In this paper, we introduce a general purpose, computational approach to detecting the geometric scale of real-valued functions over a fixed domain using a deterministic interpolation technique from computational geometry. The algorithm is intended to work on scalar data in moderate dimensions (2-10). Our algorithm is based on the observation that a sequence of piecewise linear interpolants will converge to a continuous function at a quadratic rate (in $L^2$ norm) if and only if the data are sampled densely enough to distinguish the feature from noise (assuming sufficiently regular sampling). We present numerical experiments demonstrating how our method can identify feature scale, estimate uncertainty in feature scale, and assess the sampling density for fixed (i.e., static) datasets of input-output pairs. We include analytical results in support of our numerical findings and have released lightweight code that can be adapted for use in a variety of data science settings.
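
The quadratic-rate criterion can be checked directly in one dimension: successively refine a grid, build the piecewise linear interpolant, and watch whether the $L^2$ error drops by roughly a factor of 4 per halving of the spacing. This is a toy NumPy illustration of the convergence observation, not the paper's moderate-dimension algorithm.

```python
import numpy as np

def l2_interp_error(f, n_samples, n_test=100_000):
    """L^2 error of the piecewise linear interpolant of f on [0, 1]
    built from n_samples equally spaced points."""
    xs = np.linspace(0.0, 1.0, n_samples)
    xt = np.linspace(0.0, 1.0, n_test)
    interp = np.interp(xt, xs, f(xs))        # piecewise linear interpolant
    return np.sqrt(np.mean((f(xt) - interp) ** 2))

f = lambda x: np.sin(8 * np.pi * x)          # a smooth function with a clear scale
for n in (10, 20, 40, 80, 160, 320):
    print(n, l2_interp_error(f, n))
# Once the grid resolves the oscillation (roughly n >= 32 points for this f),
# each doubling of n cuts the error by about 4x -- the quadratic rate used as
# the signal that the sampling density is sufficient.
```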

In submodular multiway partition (SUB-MP), the input is a non-negative submodular function $f:2^V \rightarrow \mathbb{R}_{\ge 0}$ given by an evaluation oracle along with $k$ terminals $t_1, t_2, \ldots, t_k\in V$. The goal is to find a partition $V_1, V_2, \ldots, V_k$ of $V$ with $t_i\in V_i$ for every $i\in [k]$ in order to minimize $\sum_{i=1}^k f(V_i)$. In this work, we focus on SUB-MP when the input function is monotone (termed MONO-SUB-MP). MONO-SUB-MP formulates partitioning problems over several interesting structures -- e.g., matrices, matroids, graphs, and hypergraphs. MONO-SUB-MP is NP-hard since the graph multiway cut problem can be cast as a special case. We investigate the approximability of MONO-SUB-MP: we show that it admits a $4/3$-approximation and does not admit a $(10/9-\epsilon)$-approximation for every constant $\epsilon>0$. Next, we study a special case of MONO-SUB-MP where the monotone submodular function of interest is the coverage function of an input graph, termed GRAPH-COVERAGE-MP. GRAPH-COVERAGE-MP is equivalent to the classic multiway cut problem for the purposes of exact optimization. We show that GRAPH-COVERAGE-MP admits a $1.125$-approximation and does not admit a $(1.00074-\epsilon)$-approximation for every constant $\epsilon>0$ assuming the Unique Games Conjecture. These results separate GRAPH-COVERAGE-MP from graph multiway cut in terms of approximability.
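
To see why GRAPH-COVERAGE-MP and graph multiway cut coincide for exact optimization, under the reading (an assumption here) that the coverage function is $f(S) = $ the number of edges with at least one endpoint in $S$: every uncut edge is counted once across the parts and every cut edge twice, so $\sum_i f(V_i) = |E| + \#\{\text{cut edges}\}$. A brute-force check of this identity on a toy instance:

```python
from itertools import product

# Toy instance: 5 vertices, 3 terminals, 6 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
terminals = [0, 2, 4]
free = [1, 3]

def coverage(S):
    """f(S) = number of edges with at least one endpoint in S
    (monotone and submodular)."""
    return sum(1 for u, v in edges if u in S or v in S)

best = None
for assignment in product(range(len(terminals)), repeat=len(free)):
    parts = [{t} for t in terminals]
    for v, i in zip(free, assignment):
        parts[i].add(v)
    total = sum(coverage(P) for P in parts)          # MONO-SUB-MP objective
    part_of = {u: i for i, P in enumerate(parts) for u in P}
    cut = sum(1 for u, v in edges if part_of[u] != part_of[v])
    assert total == len(edges) + cut                 # the identity above
    if best is None or total < best[0]:
        best = (total, [sorted(P) for P in parts])
print(best)
```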
