
A Boolean constraint satisfaction problem (CSP), $\textsf{Max-CSP}(f)$, is a maximization problem specified by a constraint $f:\{-1,1\}^k\to\{0,1\}$. An instance of the problem consists of $m$ constraint applications on $n$ Boolean variables, where each constraint application applies the constraint to $k$ literals chosen from the $n$ variables and their negations. The goal is to compute the maximum number of constraints that can be satisfied by a Boolean assignment to the $n$~variables. In the $(\gamma,\beta)$-approximation version of the problem, for parameters $\gamma \geq \beta \in [0,1]$, the goal is to distinguish instances where at least a $\gamma$ fraction of the constraints can be satisfied from instances where at most a $\beta$ fraction can be satisfied. In this work we consider the approximability of $\textsf{Max-CSP}(f)$ in the (dynamic) streaming setting, where constraints are inserted (and, in the dynamic setting, may also be deleted) one at a time. We completely characterize the approximability of all Boolean CSPs in the dynamic streaming setting. Specifically, given $f$, $\gamma$ and $\beta$, we show that either (1) the $(\gamma,\beta)$-approximation version of $\textsf{Max-CSP}(f)$ has a probabilistic dynamic streaming algorithm using $O(\log n)$ space, or (2) for every $\epsilon > 0$ the $(\gamma-\epsilon,\beta+\epsilon)$-approximation version of $\textsf{Max-CSP}(f)$ requires $\Omega(\sqrt{n})$ space for probabilistic dynamic streaming algorithms. We also extend previously known results in the insertion-only setting to a wide variety of cases, in particular to the case $k=2$, where we obtain a dichotomy, and to the case where the satisfying assignments of $f$ support a distribution on $\{-1,1\}^k$ with uniform marginals.
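As a concrete (non-streaming) illustration of the quantity being approximated, the following brute-force sketch computes the maximum satisfiable fraction of a small $\textsf{Max-CSP}(f)$ instance; the function names and the signed-literal encoding are illustrative conventions, not from the paper.

```python
from itertools import product

def max_csp_fraction(f, constraints, n):
    """Exhaustively compute the maximum fraction of satisfied constraints.

    f: predicate on a tuple in {-1, 1}^k.
    constraints: list of k-tuples of signed literals; literal +i (1-indexed)
    refers to variable i, and -i to its negation.
    """
    best = 0
    for assignment in product((-1, 1), repeat=n):
        sat = sum(
            f(tuple(assignment[abs(lit) - 1] * (1 if lit > 0 else -1)
                    for lit in c))
            for c in constraints
        )
        best = max(best, sat)
    return best / len(constraints)

# A tiny Max-2XOR instance: x1*x2 = 1 and x1*(-x2) = 1 are contradictory,
# so only half of the constraints are simultaneously satisfiable.
xor2 = lambda t: int(t[0] * t[1] == 1)
print(max_csp_fraction(xor2, [(1, 2), (1, -2)], 2))  # 0.5
```

A streaming algorithm must approximate this value without storing the instance; the exhaustive search above only pins down the objective.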

Related content

We present a new generalization of the bin covering problem, which is known to be strongly NP-hard. In our generalization there is a positive constant $\Delta$, and we are given a set of items, each of which has a positive size. We would like to find a partition of the items into bins. We say that a bin is near-exactly covered if the total size of the items packed into it is between $1$ and $1+\Delta$. Our goal is to maximize the number of near-exactly covered bins. If $\Delta=0$, or if $\Delta>0$ is given as part of the input, we show that the problem admits no approximation algorithm with a bounded asymptotic approximation ratio (assuming that $P\neq NP$). However, for the case where $\Delta>0$ is a fixed constant, we present an asymptotic fully polynomial time approximation scheme (AFPTAS), which is our main contribution.
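The objective above is easy to state in code. This minimal sketch (with illustrative sizes, not from the paper) counts the near-exactly covered bins of a given partition:

```python
def near_exact_covered(bins, delta):
    """Count bins whose total item size lies in [1, 1 + delta]."""
    return sum(1 for b in bins if 1 <= sum(b) <= 1 + delta)

# One candidate partition of five items into two bins, with delta = 0.2:
# the first bin sums to 1.1 (near-exactly covered), the second to 1.35
# (overshoots the window), so exactly one bin counts.
partition = [[0.6, 0.5], [0.55, 0.5, 0.3]]
print(near_exact_covered(partition, 0.2))  # 1
```

The optimization problem is to choose the partition maximizing this count, which is what the AFPTAS approximates.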

A {\em universal 1-bit compressive sensing (CS)} scheme consists of a measurement matrix $A$ such that for all signals $x$ belonging to a particular class, $x$ can be approximately recovered from $\textrm{sign}(Ax)$. 1-bit CS models extreme quantization effects where only one bit of information is revealed per measurement. We focus on universal support recovery for 1-bit CS in the case of {\em sparse} signals with bounded {\em dynamic range}. Specifically, a vector $x \in \mathbb{R}^n$ is said to have sparsity $k$ if it has at most $k$ nonzero entries, and dynamic range $R$ if the ratio between its largest and smallest nonzero entries is at most $R$ in magnitude. Our main result shows that if the entries of the measurement matrix $A$ are i.i.d.~Gaussians, then under mild assumptions on the scaling of $k$ and $R$, the number of measurements needs to be $\tilde{\Omega}(Rk^{3/2})$ to recover the support of $k$-sparse signals with dynamic range $R$ using $1$-bit CS. In addition, we show that a near-matching $O(R k^{3/2} \log n)$ upper bound follows as a simple corollary of known results. The $k^{3/2}$ scaling contrasts with the known lower bound of $\tilde{\Omega}(k^2 \log n)$ for the number of measurements to recover the support of arbitrary $k$-sparse signals.
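The measurement model can be simulated directly. The sketch below draws a Gaussian matrix, takes one sign bit per measurement, and then applies a simple correlation heuristic for support recovery; the heuristic is a standard illustration, not the recovery scheme analyzed in the paper, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 20, 3, 400
support = [2, 7, 11]
x = np.zeros(n)
x[support] = [1.0, -1.5, 2.0]     # dynamic range R = 2 (ratio 2.0 / 1.0)

A = rng.standard_normal((m, n))   # i.i.d. Gaussian measurement matrix
y = np.sign(A @ x)                # one bit of information per measurement

# Illustrative heuristic (not the paper's method): keep the k coordinates
# whose measurement columns correlate most strongly with the sign pattern.
scores = np.abs(A.T @ y)
recovered = set(int(i) for i in np.argsort(scores)[-k:])
print(recovered)
```

With $m$ large relative to $k$ and $R$, the recovered set typically matches the true support; the paper quantifies how small $m$ can be.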

We introduce a notion of \emph{generic local algorithm} which strictly generalizes existing frameworks of local algorithms such as \emph{factors of i.i.d.} by capturing local \emph{quantum} algorithms such as the Quantum Approximate Optimization Algorithm (QAOA). Motivated by a question of Farhi et al. [arXiv:1910.08187, 2019], we then show limitations of generic local algorithms, including QAOA, on random instances of constraint satisfaction problems (CSPs). Specifically, we show that any generic local algorithm whose assignment to a vertex depends only on a local neighborhood with $o(n)$ other vertices (such as the QAOA at depth less than $\epsilon\log(n)$) cannot approximate Boolean CSPs arbitrarily well if the problem satisfies a geometric property from statistical physics called the coupled overlap-gap property (OGP) [Chen et al., Annals of Probability, 47(3), 2019]. We show that the random MAX-k-XOR problem has this property when $k\geq 4$ is even by extending the corresponding result for diluted $k$-spin glasses. Our concentration lemmas confirm a conjecture of Brandao et al. [arXiv:1812.04170, 2018] asserting that the landscape independence of QAOA extends to logarithmic depth -- in other words, for every fixed choice of QAOA angle parameters, the algorithm at logarithmic depth performs almost equally well on almost all instances. One of these concentration lemmas is a strengthening of McDiarmid's inequality, applicable when the random variables have a highly biased distribution, and may be of independent interest.

It is known that each word of length $n$ contains at most $n+1$ distinct palindromes. A finite rich word is a word with the maximal number of palindromic factors. The definition of palindromic richness can be naturally extended to infinite words. Sturmian words and Rote complementary symmetric sequences form two classes of binary rich words, while episturmian words and words coding symmetric $d$-interval exchange transformations provide further examples over larger alphabets. In this paper we look for morphisms of the free monoid which allow us to construct new rich words from already known rich words. We focus on morphisms in Class $P_{ret}$. This class contains morphisms injective on the alphabet and satisfying a particular palindromicity property: for every morphism $\varphi$ in the class there exists a palindrome $w$ such that $\varphi(a)w$ is a first complete return word to $w$ for each letter $a$. We characterize $P_{ret}$ morphisms which preserve richness over a binary alphabet. We also study marked $P_{ret}$ morphisms acting on alphabets with more letters. In particular we show that every Arnoux-Rauzy morphism is conjugate to a morphism in Class $P_{ret}$ and that it preserves richness.
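The bound and the notion of richness can be checked by direct enumeration. A minimal sketch (using the convention that the empty word counts as a palindromic factor, so the bound is $n+1$):

```python
def palindromic_factors(w):
    """Return the set of distinct palindromic factors of w,
    including the empty word."""
    pals = {""}
    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            if w[i:j] == w[i:j][::-1]:
                pals.add(w[i:j])
    return pals

# A word of length n has at most n + 1 distinct palindromic factors;
# words attaining the bound are rich.
print(len(palindromic_factors("abba")))  # 5: "", "a", "b", "bb", "abba" -> rich
print(len(palindromic_factors("abca")))  # 4: "", "a", "b", "c" -> not rich
```

A richness-preserving morphism maps every rich word to a word that still attains this bound, which is the property characterized for $P_{ret}$ morphisms.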

In order to determine a sparse approximation function which has a direct metric relationship with the $\ell_{0}$ quasi-norm, in this paper we introduce a triangle whose sides are composed of $\Vert \mathbf{x} \Vert_{0}$, $\Vert \mathbf{x} \Vert_{1}$ and $\Vert \mathbf{x} \Vert_{\infty}$ for any non-zero vector $\mathbf{x} \in \mathbb{R}^{n}$, obtained by delving into the iterative soft-thresholding operator. Based on this triangle, we derive the ratio of the $\ell_{1}$ and $\ell_{\infty}$ norms as a sparsity-promoting objective function for sparse signal reconstruction, and we also give a sparsity interval for the signal. Considering the $\ell_{1}/\ell_{\infty}$ minimization from the angle $\beta$ of the triangle corresponding to the side whose length is $\Vert \mathbf{x} \Vert_{\infty} - \Vert \mathbf{x} \Vert_{1}/\Vert \mathbf{x} \Vert_{0}$, we finally demonstrate the performance of the existing $\ell_{1}/\ell_{\infty}$ algorithm by comparing it with the $\ell_{1}/\ell_{2}$ algorithm.
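The metric relationship between these three quantities rests on the elementary inequalities $\Vert \mathbf{x} \Vert_{\infty} \leq \Vert \mathbf{x} \Vert_{1} \leq \Vert \mathbf{x} \Vert_{0}\,\Vert \mathbf{x} \Vert_{\infty}$, which make $\Vert \mathbf{x} \Vert_{1}/\Vert \mathbf{x} \Vert_{\infty}$ a surrogate lying in $[1, \Vert \mathbf{x} \Vert_{0}]$. A quick numerical check (the vector is arbitrary):

```python
import numpy as np

x = np.array([3.0, 0.0, -1.0, 0.5, 0.0])
l0 = np.count_nonzero(x)   # ||x||_0 = 3 (number of nonzeros)
l1 = np.abs(x).sum()       # ||x||_1 = 4.5
linf = np.abs(x).max()     # ||x||_inf = 3.0

# ||x||_inf <= ||x||_1 <= ||x||_0 * ||x||_inf for every nonzero x,
# so the ratio ||x||_1 / ||x||_inf is a lower bound on the sparsity.
assert linf <= l1 <= l0 * linf
print(l1 / linf)  # 1.5, a lower bound on the true sparsity 3
```

Minimizing this ratio therefore pushes toward vectors whose mass concentrates on few coordinates.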

We consider the following oblivious sketching problem: given $\epsilon \in (0,1/3)$ and $n \geq d/\epsilon^2$, design a distribution $\mathcal{D}$ over $\mathbb{R}^{k \times nd}$ and a function $f: \mathbb{R}^k \times \mathbb{R}^{nd} \rightarrow \mathbb{R}$, so that for any $n \times d$ matrix $A$, $$\Pr_{S \sim \mathcal{D}} [(1-\epsilon) \|A\|_{op} \leq f(S(A),S) \leq (1+\epsilon)\|A\|_{op}] \geq 2/3,$$ where $\|A\|_{op}$ is the operator norm of $A$ and $S(A)$ denotes $S \cdot A$, interpreting $A$ as a vector in $\mathbb{R}^{nd}$. We show a tight lower bound of $k = \Omega(d^2/\epsilon^2)$ for this problem. Our result considerably strengthens the result of Nelson and Nguyen (ICALP, 2014), as it (1) applies only to estimating the operator norm, which can be estimated given any OSE, and (2) applies to distributions over general linear operators $S$ which treat $A$ as a vector and compute $S(A)$, rather than the restricted class of linear operators corresponding to matrix multiplication. Our technique also implies the first tight bounds for approximating the Schatten $p$-norm for even integers $p$ via general linear sketches, improving the previous lower bound from $k = \Omega(n^{2-6/p})$ [Regev, 2014] to $k = \Omega(n^{2-4/p})$. Importantly, for sketching the operator norm up to a factor of $\alpha$, where $\alpha - 1 = \Omega(1)$, we obtain a tight $k = \Omega(n^2/\alpha^4)$ bound, matching the upper bound of Andoni and Nguyen (SODA, 2013), and improving the previous $k = \Omega(n^2/\alpha^6)$ lower bound. Finally, we also obtain the first lower bounds for approximating Ky Fan norms.

Let $n>m$, and let $A$ be an $(m\times n)$-matrix of full rank. Then the estimate $\|Ax\|\leq\|A\|\|x\|$ obviously holds, with the Euclidean norm for $x$ and $Ax$ and the spectral norm as the assigned matrix norm. We study the sets of all $x$ for which, for fixed $\delta<1$, the converse estimate $\|Ax\|\geq\delta\,\|A\|\|x\|$ holds. It turns out that in the high-dimensional case these sets fill almost the entire space once $\delta$ falls below a bound that depends on the extremal singular values of $A$ and on the ratio of the dimensions. This effect is closely related to the random projection theorem, which plays an important role in the data sciences. As a byproduct, we calculate exactly the probabilities that this theorem deals with.
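The effect is easy to observe numerically: for a random wide matrix, the ratio $\|Ax\|/(\|A\|\|x\|)$ concentrates at a small value for random directions $x$, so the set $\{x : \|Ax\|\geq\delta\|A\|\|x\|\}$ covers almost everything for small $\delta$ and almost nothing for $\delta$ near $1$. A Monte Carlo sketch (dimensions and thresholds are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 1000
A = rng.standard_normal((m, n))
spec = np.linalg.norm(A, 2)        # spectral norm ||A||

def fraction_above(delta, trials=2000):
    """Monte Carlo estimate of the spherical measure of
    {x : ||Ax|| >= delta * ||A|| * ||x||}."""
    x = rng.standard_normal((trials, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform on the sphere
    ratios = np.linalg.norm(x @ A.T, axis=1) / spec
    return np.mean(ratios >= delta)

print(fraction_above(0.9))   # essentially 0
print(fraction_above(0.03))  # essentially 1
```

The transition point reflects the typical value $\|Ax\| \approx \|A\|_F/\sqrt{n}$ for random unit $x$, which is far below $\|A\|$ when $n \gg m$.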

Given an unknown $n \times n$ matrix $A$ having non-negative entries, the \emph{inner product} (IP) oracle takes as input a specified row (or column) of $A$ and a vector $v \in \mathbb{R}^{n}$, and returns their inner product. A derivative of IP is the induced degree query in an unknown graph $G=(V(G), E(G))$, which takes a vertex $u \in V(G)$ and a subset $S \subseteq V(G)$ as input and reports the number of neighbors of $u$ that are present in $S$. The goal of this paper is to understand the strength of the inner product oracle. Our results in that direction are as follows: (i) the IP oracle can solve bilinear form estimation, i.e., estimate the value of ${\bf x}^{T}A{\bf y}$ given two vectors ${\bf x},\, {\bf y} \in \mathbb{R}^{n}$ with non-negative entries, and can sample entries of a matrix with non-negative entries almost uniformly; (ii) we tackle for the first time weighted edge estimation and weighted edge sampling, which follow as applications of the bilinear form estimation and almost uniform sampling problems, respectively; (iii) the induced degree query, a derivative of IP, can solve edge estimation and almost uniform edge sampling in induced subgraphs. To the best of our knowledge, these are the first oracle-based query complexity results for induced subgraphs. We show that IP/induced degree queries over the whole graph can simulate local queries in any induced subgraph; (iv) apart from the above, we also show that IP can solve several matrix problems, such as testing whether the matrix is diagonal, symmetric, or doubly stochastic.
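The reduction from the induced degree query to a single IP query is immediate: query the row of the adjacency matrix indexed by $u$ against the indicator vector of $S$. A minimal simulation (class and function names are illustrative):

```python
import numpy as np

class IPOracle:
    """Inner product oracle for a hidden non-negative matrix A."""
    def __init__(self, A):
        self._A = np.asarray(A, dtype=float)

    def row_ip(self, i, v):
        """Return <row i of A, v>."""
        return float(self._A[i] @ v)

def induced_degree(oracle, u, S, n):
    """Number of neighbors of u inside S, via one IP query:
    the inner product of row u of the adjacency matrix with 1_S."""
    indicator = np.zeros(n)
    indicator[list(S)] = 1.0
    return int(oracle.row_ip(u, indicator))

# Path graph 0 - 1 - 2 - 3 as an adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
oracle = IPOracle(A)
print(induced_degree(oracle, 1, {0, 3}, 4))  # 1: of 1's neighbors {0, 2}, only 0 is in S
```

Bilinear form estimation works similarly: ${\bf x}^{T}A{\bf y}$ decomposes into $n$ row queries $\langle A_i, {\bf y}\rangle$ weighted by $x_i$, and the paper's contribution is doing this with far fewer queries.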

Consider a matrix $\mathbf{F} \in \mathbb{K}^{m \times n}$ of univariate polynomials over a field~$\mathbb{K}$. We study the problem of computing the column rank profile of $\mathbf{F}$. To this end we first give an algorithm which improves the minimal kernel basis algorithm of Zhou, Labahn, and Storjohann (Proceedings ISSAC 2012). We then provide a second algorithm which computes the column rank profile of $\mathbf{F}$ with a rank-sensitive complexity of $O\tilde{~}(r^{\omega-2} n (m+D))$ operations in $\mathbb{K}$. Here, $D$ is the sum of row degrees of $\mathbf{F}$, $\omega$ is the exponent of matrix multiplication, and $O\tilde{~}(\cdot)$ hides logarithmic factors.
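The column rank profile is the lexicographically smallest set of $r$ column indices ($r$ the rank) whose columns are linearly independent. For intuition only, here is a naive greedy computation over the reals (a constant matrix rather than a polynomial one, and nothing like the paper's rank-sensitive algorithm):

```python
import numpy as np

def column_rank_profile(F):
    """Greedy left-to-right scan: keep a column iff it increases the rank.
    This yields the lexicographically smallest independent column set."""
    profile = []
    basis = np.empty((F.shape[0], 0))
    for j in range(F.shape[1]):
        candidate = np.hstack([basis, F[:, j:j + 1]])
        if np.linalg.matrix_rank(candidate) > basis.shape[1]:
            basis = candidate
            profile.append(j)
    return profile

F = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 1.0]])
print(column_rank_profile(F))  # [0, 2]: column 1 is twice column 0
```

Over $\mathbb{K}[x]$ the same definition applies with rank over the fraction field, but computing it efficiently, with complexity sensitive to the rank $r$ and the degree sum $D$, is the subject of the paper.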

In this paper we consider a class of unfitted finite element methods for scalar elliptic problems. These so-called CutFEM methods use standard finite element spaces on a fixed unfitted triangulation combined with the Nitsche technique and a ghost penalty stabilization. As a model problem we consider the application of such a method to the Poisson interface problem. We introduce and analyze a new class of preconditioners that is based on a subspace decomposition approach. The unfitted finite element space is split into two subspaces, where one subspace is the standard finite element space associated to the background mesh and the second subspace is spanned by all cut basis functions corresponding to nodes on the cut elements. We will show that this splitting is stable, uniformly in the discretization parameter and in the location of the interface in the triangulation. Based on this we introduce an efficient preconditioner that is uniformly spectrally equivalent to the stiffness matrix. Using a similar splitting, it is shown that the same preconditioning approach can also be applied to a fictitious domain CutFEM discretization of the Poisson equation. Results of numerical experiments are included that illustrate optimality of such preconditioners for the Poisson interface problem and the Poisson fictitious domain problem.
