
An infinite sequence of sets $\left\{B_{n}\right\}_{n\in\mathbb{N}}$ is said to be a heterochromatic sequence from an infinite sequence of families $\left\{ \mathcal{F}_{n} \right\}_{n \in \mathbb{N}}$ if there exists a strictly increasing sequence of natural numbers $\left\{ i_{n}\right\}_{n \in \mathbb{N}}$ such that $B_{n} \in \mathcal{F}_{i_{n}}$ for all $n \in \mathbb{N}$. In this paper, we prove that if, for each $n\in\mathbb{N}$, $\mathcal{F}_n$ is a family of {\em nicely shaped} convex sets in $\mathbb{R}^d$ such that every heterochromatic sequence $\left\{B_{n}\right\}_{n\in\mathbb{N}}$ from $\left\{ \mathcal{F}_{n} \right\}_{n \in \mathbb{N}}$ contains at least $k+2$ sets that can be pierced by a single $k$-flat (a $k$-dimensional affine subspace), then all but finitely many families in $\left\{\mathcal{F}_{n}\right\}_{n\in \mathbb{N}}$ can be pierced by finitely many $k$-flats. This result can be seen as a {\em countably colorful} generalization of the $(\aleph_0, k+2)$-theorem of Keller and Perles (Symposium on Computational Geometry 2022). We also establish the tightness of our result by proving a number of no-go theorems.

Related Content

We show that, for large enough $n$, the number of non-isomorphic pseudoline arrangements of order $n$ is greater than $2^{c\cdot n^2}$ for some constant $c > 0.2604$, improving the previous best bound of $c>0.2083$ by Dumitrescu and Mandal (2020). Arrangements of pseudolines (and in particular arrangements of lines) are important objects appearing in many forms in discrete and computational geometry; they have strong ties, for example, with oriented matroids, sorting networks, and point configurations. Let $B_n$ be the number of non-isomorphic pseudoline arrangements of order $n$ and let $b_n := \log_2(B_n)$. The problem of estimating $b_n$ dates back to Knuth, who conjectured that $b_n \leq 0.5n^2 + o(n^2)$ and derived the first bounds $n^2/6-O(n) \leq b_n \leq 0.7924(n^2+n)$. Both the upper and the lower bound have been improved several times since. The upper bound was first improved to $b_n < 0.6988n^2$ (Felsner, 1997), and then to $b_n < 0.6571 n^2$ for large enough $n$ by Felsner and Valtr (2011). In the same paper, Felsner and Valtr improved the constant in the lower bound to $c> 0.1887$, which Dumitrescu and Mandal subsequently improved to $c>0.2083$. Our new bound is based on a construction that starts with one of the constructions of Dumitrescu and Mandal and breaks it into constant-sized pieces; we then use software to compute the contribution of each piece to the overall number of pseudoline arrangements. This method adds a lot of flexibility to the construction and thus offers many avenues for future tweaks and improvements, which could lead to further tightening of the lower bound.
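
For small $n$, the value $B_n$ can be checked directly using one standard encoding of simple pseudoline arrangements as wiring diagrams, i.e., commutation classes of reduced words of the longest permutation. The Python sketch below is an independent illustration of that encoding (the function names are ours; this is unrelated to the authors' software):

```python
def reduced_words(n):
    """All reduced words for the longest element of S_n: sequences of
    adjacent transpositions s_i (0-indexed) turning 1..n into n..1, where
    each step creates a new inversion."""
    target = tuple(range(n, 0, -1))
    def rec(perm, word):
        if perm == target:
            yield tuple(word)
            return
        for i in range(n - 1):
            if perm[i] < perm[i + 1]:  # applying s_i adds an inversion
                nxt = perm[:i] + (perm[i + 1], perm[i]) + perm[i + 2:]
                yield from rec(nxt, word + [i])
    yield from rec(tuple(range(1, n + 1)), [])

def normal_form(word):
    """Cartier-Foata normal form in the trace monoid where s_i and s_j
    commute iff |i - j| >= 2: repeatedly extract the letters having no
    earlier non-commuting occurrence. Two reduced words describe the same
    wiring diagram iff their normal forms agree."""
    w, blocks = list(word), []
    while w:
        front, rest, seen = [], [], set()
        for x in w:
            if x in seen or x - 1 in seen or x + 1 in seen:
                rest.append(x)
            else:
                front.append(x)
            seen.add(x)
        blocks.append(tuple(sorted(front)))
        w = rest
    return tuple(blocks)

def count_arrangements(n):
    return len({normal_form(w) for w in reduced_words(n)})

# Known initial values; n = 6 already takes a few seconds.
print([count_arrangements(n) for n in range(1, 7)])  # [1, 1, 2, 8, 62, 908]
```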

In the Minmax Set Cover Reconfiguration problem, given a set system $\mathcal{F}$ over a universe and two covers $\mathcal{C}^\mathsf{start}$ and $\mathcal{C}^\mathsf{goal}$ of size $k$, we wish to transform $\mathcal{C}^\mathsf{start}$ into $\mathcal{C}^\mathsf{goal}$ by repeatedly adding or removing a single set of $\mathcal{F}$ while covering the universe in every intermediate state. The objective is to minimize the maximum size of any intermediate cover during the transformation. We prove that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are $\mathsf{PSPACE}$-hard to approximate within a factor of $2-\frac{1}{\operatorname{polyloglog} N}$, where $N$ is the size of the universe and the number of vertices in a graph, respectively, improving upon Ohsaka (SODA 2024) and Karthik C. S. and Manurangsi (2023). This is the first result that exhibits a sharp threshold for the approximation factor of any reconfiguration problem, since both problems admit a $2$-factor approximation algorithm due to Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (Theor. Comput. Sci., 2011). Our proof is based on a reconfiguration analogue of the FGLSS reduction from Probabilistically Checkable Reconfiguration Proofs of Hirahara and Ohsaka (2024). We also prove that for every constant $\varepsilon \in (0,1)$, Minmax Hypergraph Vertex Cover Reconfiguration on $\operatorname{poly}(\varepsilon^{-1})$-uniform hypergraphs is $\mathsf{PSPACE}$-hard to approximate within a factor of $2-\varepsilon$.
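
As a concrete illustration of the objective (not of the hardness machinery), here is a minimal exhaustive Python solver: it raises a cap on the intermediate cover size and runs a breadth-first search over all covers respecting that cap. All names are ours, and the method is exponential, so it only serves to make the problem definition executable:

```python
from collections import deque

def minmax_reconfig(sets, start, goal):
    """Smallest cap c such that `start` can be transformed into `goal` by
    single set insertions/deletions, with every intermediate collection a
    cover of the universe of size <= c. Exhaustive, for tiny instances."""
    universe = set().union(*sets)
    def covers(state):
        covered = set()
        for i in state:
            covered |= sets[i]
        return covered == universe
    start, goal = frozenset(start), frozenset(goal)
    assert covers(start) and covers(goal)
    for cap in range(max(len(start), len(goal)), len(sets) + 1):
        seen, queue = {start}, deque([start])
        while queue:
            s = queue.popleft()
            if s == goal:
                return cap
            neighbours = [s | {i} for i in range(len(sets)) if i not in s]
            neighbours += [s - {i} for i in s]
            for t in neighbours:
                if t not in seen and len(t) <= cap and covers(t):
                    seen.add(t)
                    queue.append(t)
    return None  # goal unreachable under any cap

# Tiny instance over universe {1,..,4} with two disjoint optimal covers.
sets = [{1, 2}, {3, 4}, {1, 3}, {2, 4}]
print(minmax_reconfig(sets, (0, 1), (2, 3)))  # 4: the covers must overlap
```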

Given a graph $G$ with $n$ nodes and two nodes $u,v\in G$, the {\em CoSimRank} value $s(u,v)$ quantifies the similarity between $u$ and $v$ based on graph topology. Compared to SimRank, CoSimRank has been shown to be more accurate and effective in many real-world applications, including synonym expansion, lexicon extraction, and entity relatedness in knowledge graphs. Computing all pairwise CoSimRanks in $G$ is highly expensive and challenging. Existing solutions all focus on devising approximate algorithms for this computation. To attain a desired absolute accuracy guarantee $\epsilon$, the state-of-the-art approximate algorithm for computing all pairwise CoSimRanks requires $O(n^3\log_2(\ln(\frac{1}{\epsilon})))$ time, which is prohibitively expensive even when $\epsilon$ is large. In this paper, we propose \rsim, a fast randomized algorithm for computing all pairwise CoSimRank values. The basic idea of \rsim is to approximate the $n\times n$ matrix multiplications in CoSimRank computation via random projection. Theoretically, \rsim runs in $O(\frac{n^2\ln(n)}{\epsilon^2}\ln(\frac{1}{\epsilon}))$ time and ensures an absolute error of at most $\epsilon$ in each CoSimRank value in $G$ with high probability. Extensive experiments using six real graphs demonstrate that \rsim is orders of magnitude faster than the state of the art. In particular, on a million-edge Twitter graph, \rsim answers the $\epsilon$-approximate ($\epsilon=0.1$) all-pairwise-CoSimRank query within 4 hours on a single commodity server, while existing solutions fail to terminate within a day.
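
The following NumPy sketch reconstructs the stated idea, approximating the $n \times n$ products with Johnson-Lindenstrauss random projections, under the usual formulation of pairwise CoSimRank as $S = \sum_{k\ge 0} c^k W^k (W^k)^\top$ for the row-stochastic walk matrix $W$. It is an assumption-laden illustration of the technique, not the \rsim implementation:

```python
import numpy as np

def cosimrank_all_pairs(A, c=0.8, eps=0.1, seed=0):
    """All-pairs CoSimRank via random projection (illustrative sketch).
    CoSimRank: s(u, v) = sum_{k>=0} c^k <p_k(u), p_k(v)>, where p_k(u) is
    the k-step random-walk distribution from u; equivalently
    S = sum_k c^k W^k (W^k)^T for the row-stochastic walk matrix W. Each
    n x n product is replaced by a product of n x d JL sketches."""
    n = A.shape[0]
    W = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # guard isolated nodes
    K = int(np.ceil(np.log(1 / eps) / np.log(1 / c)))    # truncation: c^K <= eps
    d = int(np.ceil(4 * np.log(n) / eps ** 2))           # JL sketch dimension
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d)) / np.sqrt(d)         # X = W^k R at k = 0
    S = X @ X.T                                          # k = 0 term
    for k in range(1, K + 1):
        X = W @ X                     # advance all sketched walks one step
        S += c ** k * (X @ X.T)
    return S
```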

The random geometric graph $\mathsf{RGG}(n,\mathbb{S}^{d-1}, p)$ is formed by sampling $n$ i.i.d. vectors $\{V_i\}_{i = 1}^n$ uniformly on $\mathbb{S}^{d-1}$ and placing an edge between pairs of vertices $i$ and $j$ for which $\langle V_i,V_j\rangle \ge \tau^p_d$, where $\tau^p_d$ is chosen so that the expected density is $p$. We study the low-degree Fourier coefficients of the distribution $\mathsf{RGG}(n,\mathbb{S}^{d-1}, p)$ and its Gaussian analogue. Our main conceptual contribution is a novel two-step strategy for bounding Fourier coefficients which we believe is more widely applicable to studying latent space distributions. First, we localize the dependence among edges to a few fragile edges. Second, we partition the space of latent vector configurations $(\mathbb{S}^{d-1})^{\otimes n}$ based on the set of fragile edges, and on each subset of configurations we define a noise operator acting independently on edges not incident (in an appropriate sense) to fragile edges. We apply the resulting bounds to: 1) settle the low-degree polynomial complexity of distinguishing spherical and Gaussian random geometric graphs from Erdős–Rényi, both when a complete set of edges is observed and in the non-adaptively chosen mask $\mathcal{M}$ model recently introduced by [MVW24]; 2) exhibit a statistical-computational gap for distinguishing $\mathsf{RGG}$ and the planted coloring model [KVWX23] in a regime where $\mathsf{RGG}$ is distinguishable from Erdős–Rényi; 3) reprove known bounds on the second eigenvalue of random geometric graphs.
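
For concreteness, the model is easy to sample. In the short NumPy sketch below, the exact threshold $\tau^p_d$ is replaced by an empirical quantile of the pairwise inner products, a simulation shortcut rather than the definition above:

```python
import numpy as np

def sample_rgg(n, d, p, rng=None):
    """Sample RGG(n, S^{d-1}, p): n i.i.d. uniform unit vectors, with an
    edge {i, j} iff <V_i, V_j> >= tau. As a shortcut, tau is set to the
    empirical (1 - p)-quantile of the pairwise inner products, a stand-in
    for the exact threshold tau_d^p."""
    rng = rng or np.random.default_rng()
    V = rng.standard_normal((n, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # uniform on the sphere
    G = V @ V.T
    iu = np.triu_indices(n, k=1)
    tau = np.quantile(G[iu], 1 - p)
    A = (G >= tau).astype(int)
    np.fill_diagonal(A, 0)
    return A, V                        # adjacency matrix + latent vectors

A, V = sample_rgg(200, d=20, p=0.1)
print(A.sum() / (200 * 199))           # edge density, approximately 0.1
```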

A geometric $t$-spanner $\mathcal{G}$ on a set $S$ of $n$ point sites in a metric space $P$ is a subgraph of the complete graph on $S$ such that for every pair of sites $p,q$ the distance in $\mathcal{G}$ is at most $t$ times the distance $d(p,q)$ in $P$. We call a connection between two sites in the spanner a link. In some settings, such as when $P$ is a simple polygon with $m$ vertices and a link is a shortest path in $P$, links can consist of $\Theta (m)$ segments and thus have non-constant complexity. The total spanner complexity is a recently introduced measure of how compact a spanner is. In this paper, we study what happens if we are allowed to introduce $k$ Steiner points to reduce the spanner complexity. We study such Steiner spanners in simple polygons, polygonal domains, and edge-weighted trees. Surprisingly, we show that Steiner points have only limited utility. For a spanner that uses $k$ Steiner points, we provide an $\Omega(nm/k)$ lower bound on the worst-case complexity of any $(3-\varepsilon)$-spanner, and an $\Omega(mn^{1/(t+1)}/k^{1/(t+1)})$ lower bound on the worst-case complexity of any $(t-\varepsilon)$-spanner, for any constant $\varepsilon\in (0,1)$ and integer constant $t \geq 2$. These lower bounds hold in all settings. Additionally, we show NP-hardness for the problem of deciding whether a set of sites in a polygonal domain admits a $3$-spanner with a given maximum complexity using $k$ Steiner points. On the positive side, for trees we show how to build a $2t$-spanner that uses $k$ Steiner points and has complexity $O(mn^{1/t}/k^{1/t} + n \log (n/k))$, for any integer $t \geq 1$. We generalize this result to forests, and apply it to obtain a $2\sqrt{2}t$-spanner in a simple polygon or a $6t$-spanner in a polygonal domain, with total complexity $O(mn^{1/t}(\log k)^{1+1/t}/k^{1/t} + n\log^2 n)$.
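
For background on the spanner notion itself, the classic path-greedy construction below builds a geometric $t$-spanner on points in the plane, with straight segments as links and no Steiner points; it illustrates the definition, not the paper's polygonal constructions (a minimal Python sketch, names ours):

```python
import heapq
import math
from itertools import combinations

def greedy_spanner(points, t):
    """Path-greedy t-spanner: examine pairs by increasing distance and add
    a link only if the current spanner distance exceeds t * d(p, q)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    adj = [[] for _ in range(n)]

    def spanner_dist(src, dst, cutoff):
        """Dijkstra from src, abandoning labels that exceed the cutoff."""
        best = [math.inf] * n
        best[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if u == dst:
                return du
            if du > best[u] or du > cutoff:
                continue
            for v, w in adj[u]:
                if du + w < best[v]:
                    best[v] = du + w
                    heapq.heappush(pq, (du + w, v))
        return math.inf

    edges = []
    for i, j in sorted(combinations(range(n), 2), key=lambda e: dist(*e)):
        d = dist(i, j)
        if spanner_dist(i, j, t * d) > t * d:
            adj[i].append((j, d))
            adj[j].append((i, d))
            edges.append((i, j))
    return edges
```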

We show that, for every $k\geq 2$, $C_{2k}$-freeness can be decided in $O(n^{1-1/k})$ rounds in the \CONGEST{} model by a randomized Monte-Carlo distributed algorithm with one-sided error probability $1/3$. This matches the best round complexities of previously known algorithms for $k\in\{2,3,4,5\}$ by Drucker et al. [PODC'14] and Censor-Hillel et al. [DISC'20], but improves the complexities of the known algorithms for $k>5$ by Eden et al. [DISC'19], which were essentially of the form $\tilde O(n^{1-2/k^2})$. Our algorithm uses colored BFS-explorations with threshold, but with an original \emph{global} approach that enables us to overcome a recent impossibility result by Fraigniaud et al. [SIROCCO'23] about using colored BFS-explorations with \emph{local} threshold for detecting cycles. We also show how to quantize our algorithm to achieve round complexity $\tilde O(n^{\frac{1}{2}-\frac{1}{2k}})$ in the quantum setting for deciding $C_{2k}$-freeness. Furthermore, this allows us to improve the known quantum complexities of the simpler problem of detecting cycles of length \emph{at most}~$2k$ by van Apeldoorn and de Vos [PODC'22]. Our quantization is in two steps. First, the congestion of our randomized algorithm is reduced, at the cost of also reducing its success probability. Second, the success probability is boosted using a new quantum framework derived from sequential algorithms, namely Monte-Carlo quantum amplification.

In the Determinant Maximization problem, given an $n\times n$ positive semi-definite matrix $\bf{A}$ in $\mathbb{Q}^{n\times n}$ and an integer $k$, we are required to find a $k\times k$ principal submatrix of $\bf{A}$ having the maximum determinant. This problem is known to be NP-hard, and was further proven to be W[1]-hard with respect to $k$ by Koutis. However, there is still room to explore its parameterized complexity in restricted cases, in the hope of overcoming the general-case parameterized intractability. In this study, we rule out the fixed-parameter tractability of Determinant Maximization even if the input matrix is extremely sparse or of low rank, or an approximate solution is acceptable. We first prove that Determinant Maximization is NP-hard and W[1]-hard even if the input matrix is an arrowhead matrix, i.e., the underlying graph formed by its nonzero entries is a star, implying that structural sparsity does not help. By contrast, Determinant Maximization is known to be solvable in polynomial time on tridiagonal matrices. Thereafter, we demonstrate W[1]-hardness with respect to the rank $r$ of the input matrix. Our result is stronger than Koutis' result in the sense that every $k\times k$ principal submatrix is singular whenever $k>r$. Finally, we give evidence that it is W[1]-hard to approximate Determinant Maximization parameterized by $k$ within a factor of $2^{-c\sqrt{k}}$ for some universal constant $c>0$. Our hardness result is conditional on the Parameterized Inapproximability Hypothesis posed by Lokshtanov, Ramanujan, Saurabh, and Zehavi, which asserts that a gap version of the Binary Constraint Satisfaction Problem is W[1]-hard. To complement this result, we develop an $\varepsilon$-additive approximation algorithm that runs in $\varepsilon^{-r^2}\cdot r^{O(r^3)}\cdot n^{O(1)}$ time for matrices of rank $r$, provided that the diagonal entries are bounded.
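
For comparison, the trivial exact algorithm enumerates all $\binom{n}{k}$ principal submatrices, which is feasible only for tiny $n$; the hardness results above concern the barriers to doing substantially better. A short NumPy baseline (names ours, illustration only):

```python
import numpy as np
from itertools import combinations

def max_det_principal_submatrix(A, k):
    """Scan all C(n, k) principal submatrices of a PSD matrix A and return
    the maximum determinant together with the chosen index set."""
    best_val, best_idx = -np.inf, None
    for idx in combinations(range(A.shape[0]), k):
        val = np.linalg.det(A[np.ix_(idx, idx)])
        if val > best_val:
            best_val, best_idx = val, idx
    return best_val, best_idx

# PSD example: a Gram matrix; the determinant of a principal submatrix is
# the squared volume spanned by the corresponding vectors.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
print(max_det_principal_submatrix(X @ X.T, 3))
```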

In cut sparsification, all cuts of a hypergraph $H=(V,E,w)$ are approximated within a $1\pm\epsilon$ factor by a small hypergraph $H'$. This widely applied method was recently generalized to a setting where the cost of cutting each hyperedge $e$ is provided by a splitting function $g_e: 2^e\to\mathbb{R}_+$. This generalization is called a submodular hypergraph when the functions $\{g_e\}_{e\in E}$ are submodular, and it arises in machine learning, combinatorial optimization, and algorithmic game theory. Previous work studied the setting where $H'$ is a reweighted sub-hypergraph of $H$, and measured the size of $H'$ by the number of hyperedges in it. In this setting, we present two results: (i) all submodular hypergraphs admit sparsifiers of size polynomial in $n=|V|$ and $\epsilon^{-1}$; (ii) we propose a new parameter, called spread, and use it to obtain smaller sparsifiers in some cases. We also show that for a natural family of splitting functions, relaxing the requirement that $H'$ be a reweighted sub-hypergraph of $H$ yields a substantially smaller encoding of the cuts of $H$ (smaller by almost a factor of $n$ in the number of bits). This is in contrast to graphs, where the most succinct representation is attained by reweighted subgraphs. A new tool in our construction of succinct representations is the notion of deformation, where a splitting function $g_e$ is decomposed into a sum of functions with small descriptions, and we provide upper and lower bounds for the deformation of common splitting functions.
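
For concreteness, the generalized cut objective can be made executable in a few lines. The two splitting functions below (the classic all-or-nothing cut and a concave-of-cardinality function, both submodular) are illustrative choices of ours, not necessarily the family studied in the paper:

```python
def cut_value(hyperedges, splitting_fns, S):
    """Generalized hypergraph cut: each hyperedge e is charged g_e(S & e).
    `hyperedges` is a list of sets, `splitting_fns` a parallel list of
    functions g_e(T, e)."""
    S = set(S)
    return sum(g(e & S, e) for g, e in zip(splitting_fns, hyperedges))

def all_or_nothing(T, e):
    """Classic hypergraph cut: pay 1 iff the hyperedge is split by S."""
    return 1 if 0 < len(T) < len(e) else 0

def small_side(T, e):
    """Concave function of |T|, hence a submodular splitting function."""
    return min(len(T), len(e) - len(T))

edges = [{1, 2, 3}, {3, 4, 5}]
print(cut_value(edges, [all_or_nothing, small_side], {1, 2, 4}))  # 1 + 1 = 2
```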

Given two sets $R$ and $B$ of at most $n$ points in the plane, we present efficient algorithms to find a two-line linear classifier that best separates the "red" points in $R$ from the "blue" points in $B$ and is robust to outliers. More precisely, we find a region $W_B$ bounded by two lines, i.e., a halfplane, strip, wedge, or double wedge, that contains (most of) the blue points of $B$ and few red points. Our running times vary between the optimal $O(n\log n)$ and $O(n^4)$, depending on the type of region $W_B$ and on whether we wish to minimize only red outliers, only blue outliers, or both.
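
For the simplest region type, a single halfplane, a cubic-time brute force already conveys the objective: every combinatorially distinct halfplane can be delimited by a line through two input points, so trying all pairs and counting outliers suffices. The sketch below (ours, ignoring boundary degeneracies) is a correctness baseline, far slower than the paper's algorithms:

```python
from itertools import combinations

def best_halfplane(red, blue):
    """Over all lines through two input points (both orientations), count
    red points inside the candidate blue halfplane and blue points outside
    it; O(n^3) overall. Returns (min outliers, witness halfplane)."""
    pts = red + blue
    def side(a, b, p):  # > 0 iff p lies left of the directed line a -> b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    best = (len(pts), None)
    for a, b in combinations(pts, 2):
        for sgn in (1, -1):
            red_in = sum(sgn * side(a, b, p) >= 0 for p in red)
            blue_out = sum(sgn * side(a, b, p) < 0 for p in blue)
            if red_in + blue_out < best[0]:
                best = (red_in + blue_out, (a, b, sgn))
    return best

red = [(0, 0), (2, 1), (5, 5)]
blue = [(4, 0), (5, 1), (6, 0)]
print(best_halfplane(red, blue)[0])  # 0: blue is linearly separable here
```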

We show that the border subrank of a sufficiently general tensor in $(\mathbb{C}^n)^{\otimes d}$ is $\mathcal{O}(n^{1/(d-1)})$ as $n \to \infty$. Since this matches the growth rate $\Theta(n^{1/(d-1)})$ for the generic (non-border) subrank recently established by Derksen, Makam, and Zuiddam, we find that the generic border subrank has the same growth rate. In our proof, we use a generalisation of the Hilbert-Mumford criterion that we believe will be of independent interest.
