We propose an algorithm whose input consists of parameters $k$ and $r$ and a hypergraph $H$ of rank at most $r$. The algorithm either returns a tree decomposition of $H$ of generalized hypertree width at most $4k$ or 'NO'; in the latter case, it is guaranteed that the hypertree width of $H$ is greater than $k$. Most importantly, the runtime of the algorithm is \emph{FPT} in $k$ and $r$. The approach extends to fractional hypertree width with a slightly worse approximation ($4k+1$ instead of $4k$). We hope that the results of this paper will give rise to a new research direction aimed at the design of FPT algorithms for computing and approximating hypertree width parameters on restricted classes of hypergraphs.
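
To make the output certificate concrete: the generalized hypertree width of a fixed tree decomposition is the maximum, over its bags, of the minimum number of hyperedges whose union covers the bag. Below is a minimal brute-force checker for that quantity in Python; it is illustrative only (exponential in the cover size, and not the paper's FPT algorithm), and all names are hypothetical.

```python
from itertools import combinations

def bag_cover_number(bag, hyperedges):
    """Smallest number of hyperedges whose union contains `bag`
    (brute force; suitable for small instances only)."""
    bag = set(bag)
    for size in range(1, len(hyperedges) + 1):
        for combo in combinations(hyperedges, size):
            if bag <= set().union(*combo):
                return size
    return float("inf")  # some bag vertex lies in no hyperedge

def generalized_hypertree_width(bags, hyperedges):
    """Width of a given tree decomposition: the maximum cover number
    over its bags (the tree structure does not affect the width)."""
    return max(bag_cover_number(bag, hyperedges) for bag in bags)

# Toy example: a 4-cycle hypergraph covered by two bags.
edges = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4}), frozenset({4, 1})]
bags = [{1, 2, 3}, {1, 3, 4}]
print(generalized_hypertree_width(bags, edges))  # 2
```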

While multilinear algebra appears natural for studying the multiway interactions modeled by hypergraphs, tensor methods for general hypergraphs have been stymied by theoretical and practical barriers. A recently proposed adjacency tensor is applicable to nonuniform hypergraphs, but is prohibitively costly to form and analyze in practice. We develop tensor times same vector (TTSV) algorithms for this tensor which improve complexity from $O(n^r)$ to a low-degree polynomial in $r$, where $n$ is the number of vertices and $r$ is the maximum hyperedge size. Our algorithms are implicit, avoiding formation of the order-$r$ adjacency tensor. We demonstrate the flexibility and utility of our approach in practice by developing tensor-based hypergraph centrality and clustering algorithms. We also show that these tensor measures offer complementary information to analogous graph-reduction approaches on data, and can detect higher-order structure that many existing matrix-based approaches provably cannot.
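
To illustrate the "implicit" idea in the simplest setting, the sketch below evaluates $b = Av^{r-1}$ for an $r$-uniform hypergraph with unit edge weights by iterating over hyperedges, in $O(|E| \cdot r^2)$ time rather than forming the $O(n^r)$ tensor. The weighted blowup that the actual adjacency tensor uses to handle nonuniform hyperedges is omitted, and the names are hypothetical.

```python
from math import factorial

def ttsv1_uniform(n, hyperedges, v, r):
    """Implicit tensor-times-same-vector product b = A v^(r-1) for an
    r-uniform hypergraph with unit edge weights. In the symmetric
    adjacency tensor, an edge e containing vertex i contributes one entry
    for each of the (r-1)! orderings of the remaining r-1 vertices, so
        b[i] = (r-1)! * sum over edges e containing i of
               the product of v[j] for j in e, j != i.
    Runs in O(|E| * r^2) time; the O(n^r) tensor is never materialized."""
    b = [0.0] * n
    for e in hyperedges:
        for i in e:
            p = 1.0
            for j in e:
                if j != i:
                    p *= v[j]
            b[i] += factorial(r - 1) * p
    return b

# Toy 3-uniform hypergraph on 4 vertices, all-ones vector.
edges = [(0, 1, 2), (1, 2, 3)]
print(ttsv1_uniform(4, edges, [1.0, 1.0, 1.0, 1.0], r=3))  # [2.0, 4.0, 4.0, 2.0]
```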

In this paper we investigate the existence of subexponential parameterized algorithms for three fundamental cycle-hitting problems in geometric graph classes. The considered problems, \textsc{Triangle Hitting} (TH), \textsc{Feedback Vertex Set} (FVS), and \textsc{Odd Cycle Transversal} (OCT), ask for the existence in a graph $G$ of a set $X$ of at most $k$ vertices such that $G-X$ is, respectively, triangle-free, acyclic, or bipartite. Such subexponential parameterized algorithms are known to exist in planar and even $H$-minor-free graphs by bidimensionality theory [Demaine et al., JACM 2005], and there is a recent line of work lifting these results to geometric graph classes consisting of intersection graphs of "fat" objects ([Grigoriev et al., FOCS 2022] and [Lokshtanov et al., SODA 2022]). In this paper we focus on "thin" objects by considering intersection graphs of segments in the plane with $d$ possible slopes ($d$-DIR graphs) and contact graphs of segments in the plane. Assuming the ETH, we rule out the existence of algorithms:
- solving TH in time $2^{o(n)}$ in 2-DIR graphs; and
- solving TH, FVS, and OCT in time $2^{o(\sqrt{n})}$ in $K_{2,2}$-free contact 2-DIR graphs.
These results indicate that additional restrictions are necessary in order to obtain subexponential parameterized algorithms for these problems. In this direction we provide:
- a $2^{O(k^{3/4}\cdot \log k)}n^{O(1)}$-time algorithm for FVS in contact segment graphs;
- a $2^{O(\sqrt d\cdot t^2 \log t\cdot k^{2/3}\log k)} n^{O(1)}$-time algorithm for TH in $K_{t,t}$-free $d$-DIR graphs; and
- a $2^{O(k^{7/9}\log^{3/2}k)} n^{O(1)}$-time algorithm for TH in contact segment graphs.
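
For concreteness, a $d$-DIR graph is simply the intersection graph of segments drawn with at most $d$ distinct slopes. The hedged Python sketch below builds such an intersection graph from segment endpoints using the standard orientation-based intersection test; it is quadratic in the number of segments, and all names are hypothetical.

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    val = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (val > 0) - (val < 0)

def _on_segment(a, b, p):
    """Is the collinear point p inside the bounding box of segment ab?"""
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_intersect(s, t):
    """Standard orientation-based test for closed segments."""
    (p1, p2), (p3, p4) = s, t
    d1, d2 = _orient(p3, p4, p1), _orient(p3, p4, p2)
    d3, d4 = _orient(p1, p2, p3), _orient(p1, p2, p4)
    if d1 != d2 and d3 != d4:
        return True
    return any(d == 0 and _on_segment(a, b, p) for d, (a, b, p) in
               [(d1, (p3, p4, p1)), (d2, (p3, p4, p2)),
                (d3, (p1, p2, p3)), (d4, (p1, p2, p4))])

def intersection_graph(segments):
    """Adjacency sets of the intersection graph; O(n^2) pairwise tests."""
    n = len(segments)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if segments_intersect(segments[i], segments[j]):
                adj[i].add(j)
                adj[j].add(i)
    return adj

# Two slopes (2-DIR): a horizontal and two vertical segments; only one crossing.
segs = [((0, 1), (2, 1)), ((1, 0), (1, 2)), ((3, 0), (3, 2))]
print(intersection_graph(segs))  # {0: {1}, 1: {0}, 2: set()}
```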

The notion of $\mathcal{H}$-treewidth, where $\mathcal{H}$ is a hereditary graph class, was recently introduced as a generalization of the treewidth of an undirected graph. Roughly speaking, a graph of $\mathcal{H}$-treewidth at most $k$ can be decomposed into (arbitrarily large) $\mathcal{H}$-subgraphs which interact only through vertex sets of size $O(k)$ which can be organized in a tree-like fashion. $\mathcal{H}$-treewidth can be used as a hybrid parameterization to develop fixed-parameter tractable algorithms for $\mathcal{H}$-deletion problems, which ask to find a minimum vertex set whose removal from a given graph $G$ turns it into a member of $\mathcal{H}$. The bottleneck in the current parameterized algorithms lies in the computation of suitable tree $\mathcal{H}$-decompositions. We present FPT approximation algorithms to compute tree $\mathcal{H}$-decompositions for hereditary and union-closed graph classes $\mathcal{H}$. Given a graph of $\mathcal{H}$-treewidth $k$, we can compute a 5-approximate tree $\mathcal{H}$-decomposition in time $f(O(k)) \cdot n^{O(1)}$ whenever $\mathcal{H}$-deletion parameterized by solution size can be solved in time $f(k) \cdot n^{O(1)}$ for some function $f(k) \geq 2^k$. The current-best algorithms either achieve an approximation factor of $k^{O(1)}$ or construct optimal decompositions while suffering from non-uniformity with unknown parameter dependence. Using these decompositions, we obtain algorithms solving Odd Cycle Transversal in time $2^{O(k)} \cdot n^{O(1)}$ parameterized by $\mathsf{bipartite}$-treewidth and Vertex Planarization in time $2^{O(k \log k)} \cdot n^{O(1)}$ parameterized by $\mathsf{planar}$-treewidth, showing that these can be as fast as the solution-size parameterizations and giving the first ETH-tight algorithms for parameterizations by hybrid width measures.
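
The sketch below is a loose Python illustration (using networkx) of the two local conditions suggested by the informal description above: each base part must induce a graph in $\mathcal{H}$, checked via a caller-supplied membership oracle, and must interact with the rest of the graph through few vertices. It is an assumption-laden toy, not the formal tree $\mathcal{H}$-decomposition validity test from the literature.

```python
import networkx as nx

def check_base_components(G, base_parts, in_H, budget):
    """Sanity-check the two key local conditions behind H-treewidth:
    (1) each base part induces a graph in H (per the oracle `in_H`), and
    (2) each base part touches the rest of G through at most `budget`
    vertices. NOT the full tree H-decomposition validity test."""
    for part in base_parts:
        if not in_H(G.subgraph(part)):
            return False
        boundary = {u for v in part for u in G.neighbors(v)} - set(part)
        if len(boundary) > budget:
            return False
    return True

# Toy use with H = bipartite graphs, on a 6-cycle.
G = nx.cycle_graph(6)
ok = check_base_components(G, [range(0, 3)], nx.is_bipartite, budget=2)
print(ok)  # True: {0,1,2} induce a path (bipartite), boundary is {3,5}
```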

We derive an explicit formula, valid for all integers $r,d\ge 0$, for the dimension of the vector space $C^r_d(\Delta)$ of piecewise polynomial functions continuously differentiable to order $r$ and whose constituents have degree at most $d$, where $\Delta$ is a planar triangulation that has a single totally interior edge. This extends previous results of Toh\v{a}neanu, Min\'{a}\v{c}, and Sorokina. Our result is a natural successor to Schumaker's 1979 dimension formula for splines on a planar vertex star. Indeed, there has not been a dimension formula at this level of generality (valid for all integers $d,r\ge 0$ and any vertex coordinates) since Schumaker's result. We derive our results using commutative algebra.

In this paper, we study the maximum clique problem on hyperbolic random graphs. The hyperbolic random graph is a mathematical model for analyzing scale-free networks, as it effectively explains the power-law degree distribution of such networks. We propose a simple algorithm for finding a maximum clique in a hyperbolic random graph. We first analyze the running time of our algorithm theoretically: we can compute a maximum clique on a hyperbolic random graph $G$ in $O(m + n^{4.5(1-\alpha)})$ expected time if a geometric representation is given, or in $O(m + n^{6(1-\alpha)})$ expected time if it is not, where $n$ and $m$ denote the numbers of vertices and edges of $G$, respectively, and $\alpha$ is a parameter controlling the power-law exponent of the degree distribution of $G$. We also implemented and evaluated our algorithm empirically; it outperforms the previous algorithm [BFK18] both practically and theoretically. Beyond hyperbolic random graphs, we experimented on real-world networks, and for most instances we efficiently find large cliques close to the optimal solutions.
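
One geometric fact behind algorithms of this kind (a warm-up, not the paper's algorithm): in the hyperbolic random graph model, two vertices are adjacent iff their hyperbolic distance is at most a threshold $R$, and since $d(u,v) \le r_u + r_v$ by the triangle inequality through the origin, all vertices with radial coordinate below $R/2$ are pairwise adjacent. The sketch below extracts that guaranteed clique; names are hypothetical.

```python
def central_clique(radii, R):
    """Vertices with radial coordinate r < R/2 form a clique in a
    hyperbolic random graph with connection threshold R: the hyperbolic
    distance obeys d(u, v) <= r_u + r_v, so any two such vertices lie
    within distance R and are adjacent. The returned set is a guaranteed
    (not necessarily maximum) clique, i.e., a lower bound."""
    return [v for v, r in enumerate(radii) if r < R / 2.0]

# Toy disk of radius R = 10: the three most central vertices form a clique.
radii = [1.2, 4.9, 3.0, 7.5, 9.1]
print(central_clique(radii, R=10.0))  # [0, 1, 2]
```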

In PATH SET PACKING, the input is an undirected graph $G$, a collection $\mathcal{P}$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ pairwise edge-disjoint paths in $\mathcal{P}$. We study the parameterized complexity of PATH SET PACKING with respect to both natural and structural parameters. We show that the problem is $W[1]$-hard with respect to vertex cover number, and $W[1]$-hard with respect to pathwidth plus maximum degree plus solution size. These results answer an open question raised at COCOON 2018. On the positive side, we present an FPT algorithm parameterized by feedback vertex number plus maximum degree, and an FPT algorithm parameterized by treewidth plus maximum degree plus maximum length of a path in $\mathcal{P}$. These positive results complement the hardness of PATH SET PACKING with respect to any subset of the parameters used in the FPT algorithms. We also give a $4$-approximation algorithm for PATH SET PACKING which runs in FPT time when parameterized by feedback edge number.
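
As a baseline for intuition (explicitly not the paper's FPT or $4$-approximation algorithm), a natural greedy heuristic scans the paths from shortest to longest and keeps any path that is edge-disjoint from those already chosen. A minimal Python sketch, with hypothetical names:

```python
def greedy_path_packing(paths):
    """Greedy heuristic for PATH SET PACKING: keep a path whenever it
    shares no edge with the paths already chosen. Each path is a list of
    vertices; edges are normalized as sorted pairs so (u, v) and (v, u)
    compare equal."""
    def edges(path):
        return {tuple(sorted(e)) for e in zip(path, path[1:])}

    chosen, used = [], set()
    for p in sorted(paths, key=len):      # shortest paths block the least
        pe = edges(p)
        if pe.isdisjoint(used):
            chosen.append(p)
            used |= pe
    return chosen

# Toy instance on the path graph 1-2-3-4-5.
paths = [[1, 2, 3], [3, 4], [2, 3, 4, 5], [4, 5]]
print(greedy_path_packing(paths))  # [[3, 4], [4, 5], [1, 2, 3]] -> k = 3
```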

Uniform sampling of bipartite graphs and hypergraphs with given degree sequences is necessary for building null models to statistically evaluate their topology. Because these graphs can be represented as binary matrices, the problem is equivalent to uniformly sampling $r \times c$ binary matrices with fixed row and column sums. The trade algorithm, which includes both the curveball and fastball implementations, is the state of the art for performing such sampling. Its mixing time is unknown, although $5r$ is used as a heuristic. In this paper we propose a new distribution-based approach that not only provides an estimate of the mixing time, but also returns a sample of matrices that are guaranteed (within a user-chosen error tolerance) to be uniformly randomly sampled. In numerical experiments on matrices that vary by size, fill, and row and column sum distributions, we find that the upper bound on the mixing time is at least $10r$, and that it increases as a function of both $c$ and the fraction of cells containing a 1.
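
For reference, one trade of the curveball flavor of the algorithm fits in a few lines: the 1-columns unique to each of two chosen rows are pooled and randomly redistributed, which preserves all row and column sums. A minimal Python sketch follows (the $5r$ loop count is the heuristic mentioned above, not a proven mixing time; names are hypothetical):

```python
import random

def curveball_trade(rows, i, j, rng=random):
    """One curveball 'trade' on a binary matrix stored as a list of row
    sets (the column indices of the 1s in each row). Columns unique to
    row i and row j are pooled, shuffled, and reassigned; shared columns
    stay in both rows, so all row and column sums are preserved."""
    a, b = rows[i], rows[j]
    shared = a & b
    pool = list((a | b) - shared)
    rng.shuffle(pool)
    k = len(a) - len(shared)          # row i keeps its original row sum
    rows[i] = shared | set(pool[:k])
    rows[j] = shared | set(pool[k:])

# Toy 3x4 matrix; run the heuristic number of trades.
rows = [{0, 1}, {1, 2}, {2, 3}]
r = len(rows)
for _ in range(5 * r):                # the 5r heuristic from the text
    i, j = random.sample(range(r), 2)
    curveball_trade(rows, i, j)
print(rows)  # row sums still (2, 2, 2); column sums still (1, 2, 2, 1)
```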

This paper presents an intuitive application of multivariate kernel density estimation (KDE) to data correction. The method uses the expected value of the conditional probability density function (PDF) together with a credible interval to quantify correction uncertainty. A selective KDE factor is proposed to adjust both kernel size and shape, determined through a least-squares cross-validation (LSCV) or mean conditional squared error (MCSE) criterion. The selective bandwidth method can be combined with the adaptive method to potentially improve accuracy. Two examples, one with a hypothetical dataset and one with a realistic dataset, demonstrate the efficacy of the method. The selective bandwidth methods consistently outperform non-selective methods, while the adaptive bandwidth methods improve results for the hypothetical dataset but not for the realistic one. The MCSE criterion minimizes root mean square error (RMSE) but may yield under-smoothed distributions, whereas the LSCV criterion strikes a balance between PDF fitness and low RMSE.
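
With a fixed scalar bandwidth and a Gaussian kernel, the expected value of the conditional PDF reduces to the classical Nadaraya-Watson estimator. The sketch below shows only that reduced form, omitting the paper's selective/adaptive kernel factors and credible intervals; all names are hypothetical.

```python
import numpy as np

def conditional_expectation(x_train, y_train, x_query, h):
    """E[y | x] from a Gaussian-kernel joint KDE, which collapses to the
    Nadaraya-Watson weighted average of the training responses. A single
    fixed bandwidth h is used; selective/adaptive factors are omitted."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    # kernel weight of each training point at each query point
    w = np.exp(-0.5 * ((np.asarray(x_query)[:, None] - x_train) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# Toy correction: noisy measurements y of a quadratic signal in x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = x**2 + rng.normal(0, 0.05, 200)
print(conditional_expectation(x, y, [0.2, 0.5, 0.8], h=0.05))
# approximately [0.04, 0.25, 0.64]
```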

In this paper, we consider the finite element approximation of a parabolic Dirichlet boundary control problem and establish new a priori error estimates. In the temporal semi-discretization we apply the DG(0) method for the state and the variational discretization for the control, and obtain convergence rates of $O(k^{\frac{1}{4}})$ and $O(k^{\frac{3}{4}-\varepsilon})$ $(\varepsilon>0)$ for the control, for problems posed on polytopes with $y_0\in L^2(\Omega)$, $y_d\in L^2(I;L^2(\Omega))$ and on smooth domains with $y_0\in H^{\frac{1}{2}}(\Omega)$, $y_d\in L^2(I;H^1(\Omega))\cap H^{\frac{1}{2}}(I;L^2(\Omega))$, respectively. In the full discretization of the optimal control problem posed on polytopal domains, we apply the DG(0)-CG(1) method for the state and the variational discretization approach for the control, and derive the convergence order $O(k^{\frac{1}{4}} + h^{\frac{1}{2}})$, which improves the known results by removing the mesh size condition $k=O(h^2)$ between the space mesh size $h$ and the time step $k$. As a byproduct, we obtain the a priori error estimate $O(h+k^{\frac{1}{2}})$ for the full discretization of parabolic equations with inhomogeneous Dirichlet data posed on polytopes, which also improves the known error estimate by removing the above mesh size condition.
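
For orientation, DG(0) in time is equivalent to a backward Euler step with a time-averaged right-hand side. In the simplest setting (homogeneous Dirichlet data, time steps $k_n = t_n - t_{n-1}$, and the CG(1) space $V_h$ of continuous piecewise-linear finite elements), the DG(0)-CG(1) scheme for the state reads: find $Y_n \in V_h$ such that
\[
\Big(\frac{Y_n - Y_{n-1}}{k_n}, v_h\Big) + a(Y_n, v_h) = \frac{1}{k_n}\int_{I_n} (f(t), v_h)\,dt \qquad \forall\, v_h \in V_h,\; n = 1,\dots,N.
\]
This is only a sketch of the discretization's shape; the paper's variational treatment of the inhomogeneous Dirichlet boundary control is more involved.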

We prove the first polynomial separation between the randomized and deterministic time-space tradeoffs of multi-output functions. In particular, we present a total function that, on an input of $n$ elements in $[n]$, outputs $O(n)$ elements, such that: (1) there exists a randomized oblivious algorithm with space $O(\log n)$, time $O(n\log n)$, and one-way access to randomness that computes the function with probability $1-O(1/n)$; (2) any deterministic oblivious branching program with space $S$ and time $T$ that computes the function must satisfy $T^2S\geq\Omega(n^{2.5}/\log n)$. This implies that logspace randomized algorithms for multi-output functions cannot be black-box derandomized without an $\widetilde{\Omega}(n^{1/4})$ overhead in time. Since all previously known polynomial time-space tradeoffs for multi-output functions were proved via the Borodin-Cook method, a probabilistic method that inherently gives the same lower bound for randomized and deterministic branching programs, our lower bound proof is intrinsically different from previous works. We also examine other natural candidates for proving such separations, and show that any polynomial separation for these problems would resolve the long-standing open problem of proving an $n^{1+\Omega(1)}$ time lower bound for decision problems with $\mathrm{polylog}(n)$ space.
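
To spell out the derandomization overhead: instantiating the tradeoff at logarithmic space gives
\[
S = O(\log n) \;\Longrightarrow\; T^2 \ge \Omega\!\Big(\frac{n^{2.5}}{\log^2 n}\Big) \;\Longrightarrow\; T \ge \Omega\!\Big(\frac{n^{1.25}}{\log n}\Big),
\]
so any deterministic simulation must be slower than the $O(n\log n)$ randomized algorithm by a factor of $\Omega(n^{1/4}/\log^2 n) = \widetilde{\Omega}(n^{1/4})$.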
