
We revisit Hopcroft's problem and related fundamental problems about geometric range searching. Given $n$ points and $n$ lines in the plane, we show how to count the number of point-line incidence pairs or the number of point-above-line pairs in $O(n^{4/3})$ time, which matches the conjectured lower bound and improves the best previous time bound of $n^{4/3}2^{O(\log^*n)}$ obtained almost 30 years ago by Matou\v{s}ek. We describe two interesting and different ways to achieve the result: the first is randomized and uses a new 2D version of fractional cascading for arrangements of lines; the second is deterministic and uses decision trees in a manner inspired by the sorting technique of Fredman (1976). The second approach extends to any constant dimension. Many consequences follow from these new ideas: for example, we obtain an $O(n^{4/3})$-time algorithm for line segment intersection counting in the plane, $O(n^{4/3})$-time randomized algorithms for bichromatic closest pair and Euclidean minimum spanning tree in three or four dimensions, and a randomized data structure for halfplane range counting in the plane with $O(n^{4/3})$ preprocessing time and space and $O(n^{1/3})$ query time.
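
As a point of reference (not from the paper), the trivial $O(n^2)$ counters for both quantities are shown below in Python; the paper's $O(n^{4/3})$ algorithms rely on cuttings, 2D fractional cascading, or Fredman-style decision trees, none of which this sketch attempts. The slope-intercept representation `(a, b)` for lines is an assumption of the sketch.

```python
# Brute-force O(n^2) baselines for Hopcroft-type counting problems.
# Lines are given in slope-intercept form y = a*x + b (vertical lines excluded).

def count_above(points, lines):
    """Count pairs (p, l) with point p strictly above line l."""
    return sum(1 for (px, py) in points
                 for (a, b) in lines
                 if py > a * px + b)

def count_incidences(points, lines):
    """Count exact point-line incidences (exact arithmetic assumed;
    floating-point equality is unreliable for real inputs)."""
    return sum(1 for (px, py) in points
                 for (a, b) in lines
                 if py == a * px + b)

if __name__ == "__main__":
    pts = [(0, 0), (1, 2), (2, 1)]
    lns = [(1, 0), (0, 1)]        # y = x and y = 1
    print(count_above(pts, lns), count_incidences(pts, lns))  # prints: 2 2
```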

Related content

Tangent and normal cones play an important role in constrained optimization to describe admissible search directions and, in particular, to formulate optimality conditions. They notably appear in various recent algorithms for both smooth and nonsmooth low-rank optimization where the feasible set is the set $\mathbb{R}_{\leq r}^{m \times n}$ of all $m \times n$ real matrices of rank at most $r$. In this paper, motivated by the convergence analysis of such algorithms, we study, by computing inner and outer limits, the continuity of the correspondence that maps each $X \in \mathbb{R}_{\leq r}^{m \times n}$ to the tangent cone to $\mathbb{R}_{\leq r}^{m \times n}$ at $X$. We also deduce results about the continuity of the corresponding normal cone correspondence. Finally, we show that our results include as a particular case the $a$-regularity of the Whitney stratification of $\mathbb{R}_{\leq r}^{m \times n}$ following from the fact that this set is a real algebraic variety, called the real determinantal variety.
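
For orientation, the tangent cone being studied admits a classical closed-form description (found, e.g., in work of Schneider and Uschmajew; the notation below is ours, not the abstract's): if $X\in\mathbb{R}_{\le r}^{m\times n}$ has rank $s$ and thin SVD $X=U\Sigma V^T$, then
$$
T_{\mathbb{R}_{\le r}^{m\times n}}(X)=\big\{G\in\mathbb{R}^{m\times n}:\operatorname{rank}\big((I_m-UU^T)\,G\,(I_n-VV^T)\big)\le r-s\big\},
$$
so the cone is a linear space exactly when $s=r$, which is why the continuity questions concentrate at rank-deficient points of $\mathbb{R}_{\le r}^{m\times n}$.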

In the $(1+\varepsilon,r)$-approximate near-neighbor problem for curves (ANNC) under some distance measure $\delta$, the goal is to construct a data structure for a given set $\mathcal{C}$ of curves that supports approximate near-neighbor queries: Given a query curve $Q$, if there exists a curve $C\in\mathcal{C}$ such that $\delta(Q,C)\le r$, then return a curve $C'\in\mathcal{C}$ with $\delta(Q,C')\le(1+\varepsilon)r$. There exists an efficient reduction from the $(1+\varepsilon)$-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)\cdot\delta(Q,C^*)$, where $C^*$ is the curve of $\mathcal{C}$ closest to $Q$. Given a set $\mathcal{C}$ of $n$ curves, each consisting of $m$ points in $d$ dimensions, we construct a data structure for ANNC that uses $n\cdot O(\frac{1}{\varepsilon})^{md}$ storage space and has $O(md)$ query time (for a query curve of length $m$), where the similarity between two curves is their discrete Fr\'echet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. Further, we also consider the asymmetric version of ANNC, where the length of the query curves is $k \ll m$, and obtain essentially the same storage and query bounds as above, except that $m$ is replaced by $k$. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
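
For concreteness, here is a minimal implementation of one of the two distance measures considered, the discrete Fréchet distance, via its classic quadratic dynamic program. This is only the distance computation, not the paper's data structure; the memoized recursion below assumes curves short enough for Python's recursion limit.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polygonal curves P and Q
    (lists of points in R^d), via the classic O(|P||Q|) dynamic program."""
    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j) = discrete Frechet distance of prefixes P[0..i], Q[0..j]
        if i == 0 and j == 0:
            return math.dist(P[0], Q[0])
        if i == 0:
            return max(c(0, j - 1), math.dist(P[0], Q[j]))
        if j == 0:
            return max(c(i - 1, 0), math.dist(P[i], Q[0]))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)),
                   math.dist(P[i], Q[j]))

    return c(len(P) - 1, len(Q) - 1)

# Example: two short curves in the plane.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```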

We introduce a natural knapsack intersection hierarchy for strengthening linear programming relaxations of packing integer programs, i.e., $\max\{w^Tx:x\in P\cap\{0,1\}^n\}$ where $P=\{x\in[0,1]^n:Ax \leq b\}$ and $A,b,w\ge0$. The $t^{th}$ level $P^{t}$ corresponds to adding cuts associated with the integer hull of the intersection of any $t$ knapsack constraints (rows of the constraint matrix). This model captures the maximum possible strength of "$t$-row cuts", an approach often used by solvers for small $t$. If $A$ is $m \times n$, then $P^m$ is the integer hull of $P$ and $P^1$ corresponds to adding cuts for each associated single-row knapsack problem. Thus, even separating over $P^1$ is NP-hard. However, for fixed $t$ and any $\epsilon>0$, results of Pritchard imply there is a polytime $(1+\epsilon)$-approximation for $P^{t}$. We then investigate the hierarchy's strength in the context of the well-studied all-or-nothing flow problem in trees (also called unsplittable flow on trees). For this problem, we show that the integrality gap of $P^t$ is $O(n/t)$ and give examples where the gap is $\Omega(n/t)$. We then examine the stronger formulation $P_{\text{rank}}$ where all rank constraints are added. For $P_{\text{rank}}^t$, our best lower bound drops to $\Omega(1/c)$ at level $t=n^c$ for any $c>0$. Moreover, on a well-known class of "bad instances" due to Friggstad and Gao, we show that we can achieve this gap; hence a constant integrality gap for these instances is obtained at level $n^c$.
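
A toy illustration of a level-$1$ cut (our example, not taken from the paper): for the single knapsack row $2x_1+2x_2\le 3$ over $\{0,1\}^2$, the fractional point $(\tfrac34,\tfrac34)$ lies in $P$, while
$$
P^1=\operatorname{conv}\{x\in\{0,1\}^2:2x_1+2x_2\le3\}=\{x\in[0,1]^2:x_1+x_2\le1\},
$$
so level $1$ already adds the cut $x_1+x_2\le1$ and, for $w=(1,1)$, lowers the LP optimum from $\tfrac32$ to $1$.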

In this paper we apply methods that originated in complexity theory to some problems of approximation. We observe that the construction of Alman and Williams, which disproves the rigidity of Walsh-Hadamard matrices, provides good $\ell_p$-approximation for $p<2$. It follows that the first $n$ functions of the Walsh system can be approximated with error $n^{-\delta}$ by a linear space of dimension $n^{1-\delta}$: $$ d_{n^{1-\delta}}(\{w_1,\ldots,w_n\}, L_p[0,1]) \le n^{-\delta},\quad p\in[1,2),\;\delta=\delta(p)>0. $$ We do not know whether this is possible for the trigonometric system. We show that the algebraic method of Alon--Frankl--R\"odl for bounding the number of low-signum-rank matrices works for tensors: almost all signum-tensors have large signum-rank and cannot be $\ell_1$-approximated by low-rank tensors. This implies lower bounds for $\Theta_m$, the error of $m$-term approximation of multivariate functions by sums of tensor products $u^1(x_1)\cdots u^d(x_d)$. In particular, for the set of trigonometric polynomials with spectrum in $\prod_{j=1}^d[-n_j,n_j]$ and of norm $\|t\|_\infty\le 1$ we have $$ \Theta_m(\mathcal T(n_1,\ldots,n_d)_\infty,L_1[-\pi,\pi]^d) \ge c_1(d)>0,\quad m\le c_2(d)\frac{\prod n_j}{\max\{n_j\}}. $$ Sharp bounds follow for classes of dominated mixed smoothness: $$ \Theta_m(W^{(r,r,\ldots,r)}_p,L_q[0,1]^d)\asymp m^{-\frac{rd}{d-1}},\quad 2\le p\le\infty,\; 1\le q\le 2. $$
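
For readers unfamiliar with the terminology, the signum-rank (sign-rank) of a sign matrix is commonly defined as follows; this is standard background, not a statement from the paper:
$$
\operatorname{rank}_{\pm}(S)=\min\big\{\operatorname{rank}(M):M_{ij}S_{ij}>0\ \text{for all }i,j\big\},\qquad S\in\{-1,+1\}^{m\times n},
$$
with the analogous entrywise definition for tensors. The link to approximation is that a low-rank object $\ell_1$-close to $S$ with error well below $1$ must agree with $S$ in sign on most entries, so counting bounds on low-signum-rank objects translate into lower bounds for low-rank $\ell_1$-approximation.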

Metric $k$-center clustering is a fundamental unsupervised learning primitive. Although widely used, this primitive is heavily affected by noise in the data, so a more sensible variant seeks the best solution that disregards a given number $z$ of points of the dataset, called outliers. We provide efficient algorithms for this important variant in the streaming model under the sliding-window setting, where, at each time step, the dataset to be clustered is the window $W$ of the most recent data items. Our algorithms achieve $O(1)$ approximation and, remarkably, require a working memory linear in $k+z$ and only logarithmic in $|W|$. As a by-product, we show how to estimate the effective diameter of the window $W$, which is a measure of the spread of the window points that disregards a given fraction of noisy distances. We also provide experimental evidence of the practical viability of our theoretical results.
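
For background, the classic offline greedy 2-approximation for $k$-center without outliers (Gonzalez's farthest-point heuristic) is sketched below; it is only the baseline primitive being generalized here, not the paper's sliding-window algorithm with outliers.

```python
import math

def gonzalez_k_center(points, k):
    """Greedy 2-approximation for metric k-center (no outliers):
    repeatedly pick the point farthest from the current centers.
    Points are tuples in R^d; Euclidean distance is assumed."""
    centers = [points[0]]
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < min(k, len(points)):
        i = max(range(len(points)), key=lambda j: dist[j])
        centers.append(points[i])
        for j, p in enumerate(points):
            dist[j] = min(dist[j], math.dist(p, points[i]))
    return centers, max(dist)   # chosen centers and clustering radius

centers, radius = gonzalez_k_center([(0, 0), (1, 0), (10, 0), (11, 0)], k=2)
print(centers, radius)   # two well-separated centers, radius 1.0
```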

In this paper, we study arbitrary regular factorial languages over a finite alphabet $\Sigma$. For the set $L(n)$ of words of length $n$ belonging to a regular factorial language $L$, we investigate the depth of decision trees solving the recognition and membership problems deterministically and nondeterministically. In the recognition problem, for a given word from $L(n)$, we must identify it using queries each of which, for some $i\in \{1,\ldots ,n\}$, returns the $i$th letter of the word. In the membership problem, for a given word of length $n$ over the alphabet $\Sigma$, we must decide whether it belongs to $L(n)$ using the same queries. For a given problem and type of trees, instead of the minimum depth $h(n)$ of a decision tree of the considered type solving the problem for $L(n)$, we study the smoothed minimum depth $H(n)=\max\{h(m):m\le n\}$. As $n$ grows, the smoothed minimum depth of decision trees solving the recognition problem deterministically is either bounded from above by a constant, or grows logarithmically, or grows linearly. In the remaining cases (decision trees solving the recognition problem nondeterministically, and decision trees solving the membership problem deterministically or nondeterministically), as $n$ grows, the smoothed minimum depth is either bounded from above by a constant or grows linearly. As corollaries of the obtained results, we study the joint behavior of the smoothed minimum depths of decision trees in the four considered cases and describe five complexity classes of regular factorial languages. We also investigate the class of regular factorial languages over the alphabet $\{0,1\}$ each of which is given by one forbidden word.
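
As a concrete instance of the final class mentioned above, the membership test for the factorial language of binary words avoiding a single forbidden factor is trivial to state (an illustrative sketch only; the paper's subject is the decision-tree depth of such problems, which this code does not address):

```python
def in_language(word, forbidden):
    """Membership in the factorial language of words avoiding the factor
    `forbidden` (e.g., the abstract's class: one forbidden binary word)."""
    return forbidden not in word

# Every factor (substring) of a member is itself a member, which is
# exactly what makes the language factorial.
assert in_language("0101", "11")
assert not in_language("0110", "11")
```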

Given a polyline on $n$ vertices, the polyline simplification problem asks for a minimum-size subsequence of these vertices defining a new polyline whose distance to the original polyline is at most a given threshold under some distance measure. In this paper, we improve the long-standing running time bound for the simplification of polylines under the local Fr\'echet distance. The best algorithm known so far, by Imai and Iri, has a running time cubic in $n$. We present an algorithm with a running time of $O(n^2)$ under the $L_1$ and $L_\infty$ norms, and $O(n^2 \log n)$ under $L_p$ for $p \in (1,\infty)$ (including the Euclidean norm $L_2$). Our approach is based on the ideas of Chan and Chin, who showed that under the local Hausdorff distance, the Imai-Iri algorithm can be improved to run in quadratic time for $L_1$, $L_2$, and $L_\infty$. However, the Hausdorff distance does not take the order of the points along the polyline into account. The Fr\'echet distance, which is sensitive to the course of the polylines, is hence often deemed the superior distance measure for polyline similarity, but it is also more intricate to compute. So far, the significantly faster simplification algorithms available for the Hausdorff distance have made it preferable in practice. Our new algorithm for simplification under the Fr\'echet distance matches the running time bounds for the Hausdorff distance up to logarithmic factors and thus allows the use of this more suitable distance measure.
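
A compact sketch of the Imai-Iri framework under the simpler local Hausdorff distance (the Chan-Chin setting referenced above): build the "shortcut graph" of valid segment replacements, then take a fewest-vertices path through it. Written naively, as here, it is cubic; the paper's contribution is the analogous near-quadratic bound under the local Fréchet distance, which requires machinery not shown in this sketch.

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab (2D)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def imai_iri_hausdorff(poly, eps):
    """Imai-Iri simplification under the local Hausdorff distance:
    shortcut (i, j) is valid if all intermediate vertices lie within
    eps of segment poly[i]poly[j]; then a DP finds the shortest chain."""
    n = len(poly)
    valid = [[all(point_segment_dist(poly[k], poly[i], poly[j]) <= eps
                  for k in range(i + 1, j))
              for j in range(n)] for i in range(n)]
    best = [math.inf] * n   # best[j] = fewest vertices simplifying poly[0..j]
    prev = [-1] * n
    best[0] = 1
    for j in range(1, n):
        for i in range(j):
            if valid[i][j] and best[i] + 1 < best[j]:
                best[j], prev[j] = best[i] + 1, i
    out, j = [], n - 1
    while j != -1:
        out.append(poly[j]); j = prev[j]
    return out[::-1]

print(imai_iri_hausdorff([(0, 0), (1, 0.1), (2, -0.1), (3, 0)], eps=0.2))
```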

The suffix array $SA[1..n]$ of a text $T$ of length $n$ is a permutation of $\{1,\ldots,n\}$ describing the lexicographical ordering of the suffixes of $T$, and it is considered to be among the most important data structures in string algorithms, with dozens of applications in data compression, bioinformatics, and information retrieval. One of the biggest drawbacks of the suffix array is that it is very difficult to maintain under text updates: even a single character substitution can completely change the contents of the suffix array. Thus, the suffix array of a dynamic text is modelled using suffix array queries, which return the value $SA[i]$ for any given $i\in[1..n]$. Prior to this work, the fastest dynamic suffix array implementations were by Amir and Boneh. At ISAAC 2020, they showed how to answer suffix array queries in $\tilde{O}(k)$ time, where $k\in[1..n]$ is a trade-off parameter, with $\tilde{O}(\frac{n}{k})$-time text updates. In a very recent preprint [2021], they also provided a solution with $O(\log^5 n)$-time queries and $\tilde{O}(n^{2/3})$-time updates. We propose the first data structure that supports both suffix array queries and text updates in $O({\rm polylog}\,n)$ time (achieving $O(\log^4 n)$ and $O(\log^{3+o(1)} n)$ time, respectively). Our data structure is deterministic and the running times for all operations are worst-case. In addition to the standard single-character edits (character insertions, deletions, and substitutions), we support (also in $O(\log^{3+o(1)} n)$ time) the "cut-paste" operation that moves any (arbitrarily long) substring of $T$ to any place in $T$. We complement our structure with a hardness result: unless the Online Matrix-Vector Multiplication (OMv) Conjecture fails, no data structure with $O({\rm polylog}\,n)$-time suffix array queries can support the "copy-paste" operation in $O(n^{1-\epsilon})$ time for any $\epsilon>0$.
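
For contrast with the dynamic setting, the static suffix array is trivial to build (if slowly) by direct sorting, and even a toy example makes the fragility under edits plain: changing one character of `T` can reorder many suffixes at once.

```python
def suffix_array(T):
    """Static suffix array by direct suffix sorting; O(n^2 log n) worst
    case, fine for illustration. The abstract indexes SA from 1; this
    code is 0-indexed."""
    return sorted(range(len(T)), key=lambda i: T[i:])

T = "banana"
SA = suffix_array(T)                      # [5, 3, 1, 0, 4, 2]
assert [T[i:] for i in SA] == sorted(T[i:] for i in range(len(T)))
print(SA, suffix_array("banane"))         # one substitution, new ordering
```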

The many-user MAC is an important model for understanding the energy efficiency of massive random access in 5G and beyond. Introduced by Polyanskiy (2017) for the AWGN channel, the model has since been studied in works that provide improved bounds on the asymptotic minimum energy-per-bit required to achieve a target per-user error at a given user density and payload, going beyond the AWGN setting. The best known rigorous bounds use spatially coupled codes along with the optimal AMP algorithm, but these bounds are infeasible to compute beyond a few (around 10) bits of payload. In this paper, we provide new achievability bounds for the many-user AWGN and quasi-static Rayleigh fading MACs using the spatially coupled codebook design along with a scalar AMP algorithm. The obtained bounds are computable even up to 100 bits and outperform the previous ones at this payload.
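
To fix ideas, the generic AMP template for a linear model with a soft-threshold denoiser is sketched below (textbook Donoho-Maleki-Montanari form). The paper's scalar AMP uses a code-aware denoiser and a spatially coupled design matrix, neither of which appears here, and the fixed threshold `theta` is an assumption of the sketch.

```python
import numpy as np

def amp_soft_threshold(A, y, theta=1.0, iters=30):
    """Generic AMP for y = A x + noise with soft-threshold denoising.
    The (z / delta) * <eta'> term is the Onsager correction that
    distinguishes AMP from plain iterative thresholding."""
    m, n = A.shape
    delta = m / n
    x = np.zeros(n)
    z = y.copy()
    soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)
    for _ in range(iters):
        u = x + A.T @ z                               # effective observation
        x_new = soft(u, theta)                        # denoise
        onsager = (z / delta) * np.mean(np.abs(u) > theta)  # <eta'(u)>
        z = y - A @ x_new + onsager                   # corrected residual
        x = x_new
    return x
```

In practice the threshold would be tracked by state evolution across iterations rather than held fixed; it is kept constant here only for brevity.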

Many problems in signal processing reduce to nonparametric function estimation. We propose a new methodology, piecewise convex fitting (PCF), and give a two-stage adaptive estimate. In the first stage, the number and locations of the change points are estimated using strong smoothing. In the second stage, a constrained smoothing spline fit is performed with the smoothing level chosen to minimize the MSE. The imposed constraint is that a single change point occurs in a region about each empirical change point of the first-stage estimate. This constraint is equivalent to requiring that the third derivative of the second-stage estimate have a single sign in a small neighborhood about each first-stage change point. We sketch how PCF may be applied to signal recovery, instantaneous frequency estimation, surface reconstruction, image segmentation, spectral estimation, and multivariate adaptive regression.
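
A rough sketch of the first stage under stated assumptions (SciPy's smoothing splines as the smoother, and sign changes of the third derivative as the empirical change points); the paper's exact estimator, its constrained second stage, and the MSE-driven smoothing-level choice are not reproduced here.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def change_points_by_smoothing(x, y, strong_s):
    """Illustrative stage-1 estimate: fit a heavily smoothed cubic spline
    and report sign changes of its (piecewise-constant) third derivative
    as empirical change points."""
    f = UnivariateSpline(x, y, k=3, s=strong_s)    # strong smoothing
    d3 = f.derivative(3)(x)
    idx = np.nonzero(np.diff(np.sign(d3)))[0]
    return x[idx]

x = np.linspace(0, 1, 200)
# |x - 0.5|^3 has a single third-derivative sign change, at x = 0.5.
y = np.abs(x - 0.5) ** 3 + 0.005 * np.random.default_rng(0).normal(size=x.size)
print(change_points_by_smoothing(x, y, strong_s=0.01))  # expect value(s) near 0.5
```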
