
We investigate the parameterized complexity of several problems formalizing cluster identification in graphs; in other words, we ask whether a graph contains a sufficiently large and sufficiently connected subgraph. We study three relaxations of CLIQUE: $s$-CLUB and $s$-CLIQUE, in which the relaxation concerns distances in the cluster and in the original graph, respectively, and $\gamma$-COMPLETE SUBGRAPH, in which the relaxation concerns the minimum degree in the cluster. As these three problems are known to be NP-hard, we study their parameterized complexity. We prove that $s$-CLUB and $s$-CLIQUE are NP-hard even when restricted to graphs of degeneracy $\le 3$ whenever $s \ge 3$, and to graphs of degeneracy $\le 2$ whenever $s \ge 5$; this is strictly stronger than W[1]-hardness parameterized by the degeneracy. We also show that these problems are solvable in polynomial time on graphs of degeneracy $1$. Concerning $\gamma$-COMPLETE SUBGRAPH, we prove that it is W[1]-hard parameterized by the degeneracy (which implies W[1]-hardness parameterized by the number of vertices in the $\gamma$-complete subgraph), as well as by the number of vertices outside the $\gamma$-complete subgraph.
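
To make the two distance relaxations concrete, here is a minimal pure-Python sketch (the graph encoding, helper names, and the toy star graph are ours, not from the paper): an $s$-clique only constrains distances measured in the original graph, while an $s$-club constrains distances inside the induced subgraph.

```python
from collections import deque

def bfs_dists(adj, src, allowed=None):
    """BFS distances from src, optionally restricted to the vertex set `allowed`."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if (allowed is None or v in allowed) and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_s_clique(adj, S, s):
    """S is an s-clique if every pair of its vertices is at distance <= s in the whole graph."""
    S = set(S)
    return all(bfs_dists(adj, u).get(v, float("inf")) <= s
               for u in S for v in S if u != v)

def is_s_club(adj, S, s):
    """S is an s-club if the subgraph induced by S has diameter <= s."""
    S = set(S)
    return all(bfs_dists(adj, u, allowed=S).get(v, float("inf")) <= s
               for u in S for v in S if u != v)

# Star K_{1,3} with center b: the leaves {a, c, e} are pairwise at distance 2
# through b, hence a 2-clique; the induced subgraph on the leaves has no edges,
# so they are not a 2-club.
adj = {"a": ["b"], "c": ["b"], "e": ["b"], "b": ["a", "c", "e"]}
print(is_s_clique(adj, {"a", "c", "e"}, 2))  # True
print(is_s_club(adj, {"a", "c", "e"}, 2))    # False
```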

Related Content

At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$, for an appropriate definition of degeneracy. Recently at IWOCA 2019, Conte, Kant\'e, Marino, and Uno asked whether, even for a more restrictive notion of degeneracy, this XP-delay algorithm parameterized by $d$ could be made FPT-delay parameterized by $d$ and the maximum degree $\Delta$, i.e., an algorithm with delay $f(d,\Delta)\cdot n^{O(1)}$ for some computable function $f$. We answer this question in the affirmative whenever the hypergraph corresponds to the closed neighborhoods of a graph, i.e., we show that the intimately related problem of enumerating minimal dominating sets in graphs admits an FPT-delay algorithm parameterized by the degeneracy and the maximum degree.
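
The FPT-delay algorithm itself is too involved for a short sketch, but the objects being enumerated are easy to pin down: $D$ is a minimal dominating set iff the closed neighborhoods of the vertices of $D$ cover all vertices and every $v \in D$ is needed for that cover. Below is a brute-force enumerator (ours; exponential time with no delay guarantee, for illustration only):

```python
from itertools import combinations

def closed_nbhd(adj, v):
    return {v} | set(adj[v])

def dominates(adj, D, V):
    covered = set()
    for v in D:
        covered |= closed_nbhd(adj, v)
    return covered == V

def is_minimal_dominating(adj, D):
    """D dominates V and no proper subset of D does (every v in D is needed)."""
    V = set(adj)
    if not dominates(adj, D, V):
        return False
    return all(not dominates(adj, D - {v}, V) for v in D)

def minimal_dominating_sets(adj):
    """Brute-force enumeration by increasing size; unlike the paper's algorithm,
    this gives no bound on the delay between consecutive outputs."""
    V = list(adj)
    for k in range(1, len(V) + 1):
        for D in combinations(V, k):
            if is_minimal_dominating(adj, set(D)):
                yield set(D)

# Path a-b-c: the minimal dominating sets are {b} and {a, c}.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(list(minimal_dominating_sets(adj)))  # [{'b'}, {'a', 'c'}]
```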

In this paper we examine the use of low-rank approximations for handling radiation boundary conditions in a transient heat equation in a cavity radiation setting. The finite element discretization that arises from cavity radiation is well known to be dense, which poses difficulties for the efficiency and scalability of solvers. Here we consider a special treatment of the cavity radiation discretization using a block low-rank approximation combined with hierarchical matrices. We provide an overview of the methodology and discuss techniques that can be used to improve efficiency within the framework of hierarchical matrices, including the use of the adaptive cross approximation (ACA) method. We present a number of numerical results that demonstrate the accuracy and efficiency of the approach on practical problems, showing significant speedup and memory reduction compared to the more conventional "dense matrix" approach.
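
As a concrete illustration of one ingredient, here is a minimal numpy sketch of adaptive cross approximation with partial pivoting on a dense matrix (all names and the toy kernel are ours). In an actual hierarchical-matrix code, the same loop would run on admissible off-diagonal blocks, sampling rows and columns of the radiation kernel on demand rather than forming them densely.

```python
import numpy as np

def aca(A, tol=1e-8, max_rank=None):
    """Adaptive cross approximation with partial pivoting: A ~ U @ V,
    built from individually sampled (residual) rows and columns."""
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    U, V = [], []
    approx_norm2 = 0.0
    i, used_rows = 0, set()
    for _ in range(max_rank):
        used_rows.add(i)
        row = A[i].copy()                      # residual row i
        for uu, vv in zip(U, V):
            row -= uu[i] * vv
        j = int(np.argmax(np.abs(row)))
        if abs(row[j]) < 1e-14:
            break
        v = row / row[j]
        u = A[:, j].copy()                     # residual column j
        for uu, vv in zip(U, V):
            u -= vv[j] * uu
        U.append(u); V.append(v)
        term2 = np.dot(u, u) * np.dot(v, v)
        approx_norm2 += term2                  # crude Frobenius accumulator (cross terms dropped)
        if np.sqrt(term2) <= tol * np.sqrt(approx_norm2):
            break
        cand = np.abs(u)
        cand[list(used_rows)] = -1.0           # next pivot row, avoiding reuse
        i = int(np.argmax(cand))
    return np.array(U).T, np.array(V)

# A smooth kernel sampled on a grid is numerically low-rank, much like
# well-separated cavity radiation blocks.
x = np.linspace(0.0, 1.0, 200)
A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))
U, V = aca(A, tol=1e-10)
print(U.shape[1], np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```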

An integer vector $b \in \mathbb{Z}^d$ is a degree sequence if there exists a hypergraph with vertices $\{1,\dots,d\}$ such that each $b_i$ is the number of hyperedges containing $i$. The degree-sequence polytope $\mathscr{Z}^d$ is the convex hull of all degree sequences. We show that all but a $2^{-\Omega(d)}$ fraction of integer vectors in the degree-sequence polytope are degree sequences. Furthermore, a hypergraph realizing such a point can be computed in time $2^{O(d)}$ via linear programming techniques. This is substantially faster than the $2^{O(d^2)}$ running time of the current-best algorithm for the degree-sequence problem. We also show that for $d\geq 98$, the degree-sequence polytope $\mathscr{Z}^d$ contains integer points that are not degree sequences. Moreover, we prove that the linear optimization problem over $\mathscr{Z}^d$ is $\mathrm{NP}$-hard. This complements a recent result of Deza et al. (2018), who provided an algorithm that is polynomial in $d$ and the number of hyperedges.
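
For tiny $d$ the definitions can be checked directly. The following brute-force recognizer is ours and enumerates all $2^{2^d}$ hypergraphs, so it is only sensible for $d \le 3$; the paper's point is precisely that the decision problem is NP-hard in general.

```python
from itertools import chain, combinations

def degree_sequence(d, hyperedges):
    """b_i = number of hyperedges containing vertex i (vertices are 1..d)."""
    b = [0] * d
    for e in hyperedges:
        for i in e:
            b[i - 1] += 1
    return b

def all_hyperedges(d):
    verts = range(1, d + 1)
    return [frozenset(c) for r in range(d + 1) for c in combinations(verts, r)]

def is_degree_sequence(b):
    """Brute force over all subsets of the 2^d possible hyperedges."""
    d = len(b)
    edges = all_hyperedges(d)
    for H in chain.from_iterable(combinations(edges, r) for r in range(len(edges) + 1)):
        if degree_sequence(d, H) == list(b):
            return True
    return False

print(degree_sequence(3, [{1, 2}, {1, 3}, {1, 2, 3}]))  # [3, 2, 2]
print(is_degree_sequence([3, 2, 2]))                     # True
```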

This paper provides a selective review of the statistical network analysis literature focused on clustering and inference problems for stochastic blockmodels and their variants. We survey asymptotic normality results for stochastic blockmodels as a means of thematically linking classical statistical concepts to contemporary research in network data analysis. Of note, multiple different forms of asymptotically Gaussian behavior arise in stochastic blockmodels and are useful for different purposes, pertaining to estimation and testing, the characterization of cluster structure in community detection, and understanding latent space geometry. This paper concludes with a discussion of open problems and ongoing research activities addressing asymptotic normality and its implications for statistical network modeling.
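
A minimal sketch (parameter choices ours) of the basic objects the survey treats: sample a two-block stochastic blockmodel and recover the blocks from the sign pattern of the second eigenvector of the adjacency matrix. Fluctuations of such spectral embeddings around their block centroids are one place where the asymptotic normality discussed in the survey appears.

```python
import numpy as np

def sample_sbm(sizes, P, rng):
    """Adjacency matrix of a stochastic blockmodel: vertices in blocks a, b
    are adjacent independently with probability P[a, b]."""
    z = np.repeat(np.arange(len(sizes)), sizes)   # block memberships
    n = len(z)
    upper = rng.random((n, n)) < P[z[:, None], z[None, :]]
    A = np.triu(upper, 1)
    return (A + A.T).astype(float), z

rng = np.random.default_rng(0)
P = np.array([[0.30, 0.05],
              [0.05, 0.30]])
A, z = sample_sbm([150, 150], P, rng)

# Spectral clustering: the sign of the second eigenvector separates the blocks.
vals, vecs = np.linalg.eigh(A)
labels = (vecs[:, -2] > 0).astype(int)
accuracy = max(np.mean(labels == z), np.mean(labels != z))
print(f"recovered {accuracy:.1%} of block memberships")
```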

Let $G$ be a graph on $n$ vertices of maximum degree $\Delta$. We show that, for any $\delta > 0$, the down-up walk on independent sets of size $k \leq (1-\delta)\alpha_c(\Delta)n$ mixes in time $O_{\Delta,\delta}(k\log{n})$, thereby resolving a conjecture of Davies and Perkins in an optimal form. Here, $\alpha_{c}(\Delta)n$ is the NP-hardness threshold for the problem of counting independent sets of a given size in a graph on $n$ vertices of maximum degree $\Delta$. Our mixing time has optimal dependence on $k,n$ for the entire range of $k$; previously, even polynomial mixing was not known. In fact, for $k = \Omega_{\Delta}(n)$ in this range, we establish a log-Sobolev inequality with optimal constant $\Omega_{\Delta,\delta}(1/n)$. At the heart of our proof are three new ingredients, which may be of independent interest. The first is a method for lifting $\ell_\infty$-independence from a suitable distribution on the discrete cube -- in this case, the hard-core model -- to the slice by proving stability of an Edgeworth expansion using a multivariate zero-free region for the base distribution. The second is a generalization of the Lee-Yau induction to prove log-Sobolev inequalities for distributions on the slice with considerably less symmetry than the uniform distribution. The third is a sharp decomposition-type result which provides a lossless comparison between the Dirichlet form of the original Markov chain and that of the so-called projected chain in the presence of a contractive coupling.
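
The walk itself is simple to state: drop a uniformly random vertex of the current size-$k$ independent set, then add back a uniformly random vertex that keeps the set independent. A toy sketch (encoding and the $C_6$ example ours):

```python
import random
from collections import Counter

def down_up_step(adj, I, rng):
    """One step of the down-up walk on independent sets of fixed size k:
    drop a uniform vertex, then re-add a uniform vertex keeping independence.
    The uniform distribution on size-k independent sets is stationary."""
    I = set(I)
    v = rng.choice(sorted(I))
    I.remove(v)
    blocked = set(I)
    for u in I:
        blocked |= set(adj[u])
    candidates = [w for w in adj if w not in blocked]  # v itself always qualifies
    I.add(rng.choice(candidates))
    return I

adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # 6-cycle
rng = random.Random(0)
I = {0, 2}                       # an independent set of size k = 2
counts = Counter()
for _ in range(20000):
    I = down_up_step(adj, I, rng)
    counts[frozenset(I)] += 1
print(len(counts))  # C6 has 9 independent sets of size 2; all should be visited
```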

We analyze the long-time behavior of numerical schemes, studied by \cite{LQ21} over a finite time horizon, for a class of monotone SPDEs driven by multiplicative noise. We derive several time-independent a priori estimates for both the exact and numerical solutions and establish time-independent strong error estimates between them. These uniform estimates, combined with the ergodic theory of Markov processes, are used to establish that these numerical schemes are exponentially ergodic, each with an invariant measure. Applying these results to the stochastic Allen--Cahn equation shows that each of these numerical schemes possesses at least one invariant measure and converges strongly to the exact solution with a sharp time-independent rate. We also show that the invariant measures of these schemes are exponentially ergodic, thus giving an affirmative answer to a question posed in \cite{CHS21}, provided that the interface thickness is not too small.
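
For orientation, here is a minimal numpy sketch of a semi-implicit Euler--Maruyama scheme for a one-dimensional stochastic Allen--Cahn equation. This is a generic scheme of the type studied, not necessarily the exact discretizations of \cite{LQ21}, and all parameter choices are ours.

```python
import numpy as np

def allen_cahn_semi_implicit(nu=0.02, sigma=0.1, N=100, dt=1e-3, T=50.0, seed=0):
    """du = (nu*u_xx + u - u^3) dt + sigma dW on (0,1), Dirichlet BCs.
    Linear part implicit, monotone nonlinearity explicit; long-run samples
    of u approximate an invariant measure of the scheme."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / (N + 1)
    x = np.linspace(dx, 1.0 - dx, N)
    L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / dx**2
    Minv = np.linalg.inv(np.eye(N) - dt * nu * L)  # factor once (sparse solver in practice)
    u = 0.1 * np.sin(np.pi * x)
    for _ in range(int(T / dt)):
        noise = sigma * np.sqrt(dt / dx) * rng.standard_normal(N)  # discrete space-time white noise
        u = Minv @ (u + dt * (u - u**3) + noise)
    return x, u

x, u = allen_cahn_semi_implicit()
print(float(u.mean()), float(np.abs(u).max()))
```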

We consider $t$-Lee-error-correcting codes of length $n$ over the residue ring $\mathbb{Z}_m := \mathbb{Z}/m\mathbb{Z}$ and determine upper and lower bounds on the number of $t$-Lee-error-correcting codes. We use two different methods, namely estimating isolated nodes on bipartite graphs and the graph container method. The former gives density results for codes of fixed size and the latter for any size. This confirms some recent density results for linear Lee metric codes and provides new density results for nonlinear codes. To apply a variant of the graph container algorithm we also investigate some geometrical properties of the balls in the Lee metric.
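 
To fix notation: the Lee weight of $a \in \mathbb{Z}_m$ is $\min(a, m-a)$, the Lee distance is the coordinatewise sum of Lee weights, and a code corrects $t$ Lee errors iff its codewords have pairwise Lee distance at least $2t+1$, i.e., the radius-$t$ balls are disjoint. A small sketch (ours) computing distances and the ball sizes that enter such counting arguments:

```python
def lee_weight(a, m):
    """Lee weight of a symbol a in Z_m: min(a mod m, m - (a mod m))."""
    a %= m
    return min(a, m - a)

def lee_distance(x, y, m):
    """Lee distance between words over Z_m: sum of coordinate Lee weights."""
    return sum(lee_weight(xi - yi, m) for xi, yi in zip(x, y))

def lee_ball_size(n, t, m):
    """|B_t| for length n over Z_m, by dynamic programming over coordinates."""
    ways = [1] + [0] * t               # ways[w] = words so far of Lee weight w
    weights = [lee_weight(a, m) for a in range(m)]
    for _ in range(n):
        new = [0] * (t + 1)
        for w in range(t + 1):
            if ways[w]:
                for lw in weights:
                    if w + lw <= t:
                        new[w + lw] += ways[w]
        ways = new
    return sum(ways)

print(lee_distance([1, 5, 0], [6, 0, 3], 7))  # 2 + 2 + 3 = 7
print(lee_ball_size(3, 1, 7))                  # 1 + 3*2 = 7
```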

The page number of a directed acyclic graph $G$ is the minimum $k$ for which there is a topological ordering of $G$ and a $k$-coloring of the edges such that no two edges of the same color cross, i.e., have alternating endpoints along the topological ordering. We address the long-standing open problem asking for the largest page number among all upward planar graphs. We improve the best known lower bound to $5$ and present the first asymptotic improvement over the trivial $O(n)$ upper bound, where $n$ denotes the number of vertices in $G$. Specifically, we first prove that the page number of every upward planar graph is bounded in terms of its width, as well as its height. We then combine both approaches to show that every $n$-vertex upward planar graph has page number $O(n^{2/3} \log(n)^{2/3})$.
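
A minimal sketch (encoding ours) of the underlying combinatorial condition: fix a topological order as the spine; two edges may share a page (color) unless their endpoints alternate along the spine.

```python
def edges_cross(e, f, position):
    """Two edges cross in a book embedding iff their endpoints alternate
    along the spine ordering."""
    a, b = sorted((position[e[0]], position[e[1]]))
    c, d = sorted((position[f[0]], position[f[1]]))
    return a < c < b < d or c < a < d < b

def is_valid_book_embedding(order, colored_edges):
    """Check that no two same-colored edges cross for a given spine order.
    The page number is the least number of colors over all topological orders."""
    position = {v: i for i, v in enumerate(order)}
    for i in range(len(colored_edges)):
        for j in range(i + 1, len(colored_edges)):
            (e, ce), (f, cf) = colored_edges[i], colored_edges[j]
            if ce == cf and edges_cross(e, f, position):
                return False
    return True

# K4 along the spine 0,1,2,3: edges (0,2) and (1,3) alternate, so they need
# different pages; K4 has page number 2.
order = [0, 1, 2, 3]
edges = [((0, 1), 0), ((1, 2), 0), ((2, 3), 0), ((0, 3), 0), ((0, 2), 0), ((1, 3), 1)]
print(is_valid_book_embedding(order, edges))                # True
print(is_valid_book_embedding(order, [((0, 2), 0), ((1, 3), 0)]))  # False
```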

Inferring causal structures from time series data is the central interest of many scientific inquiries. A major barrier to such inference is the problem of subsampling, i.e., measurements are taken at a much lower frequency than that of the underlying causal influences. To overcome this problem, numerous model-based and model-free methods have been proposed, yet existing methods are either limited to the linear case or fail to establish identifiability. In this work, we propose a model-free algorithm that can identify the entire causal structure from subsampled time series, without any parametric constraint. The idea is that the challenge of subsampling arises mainly from \emph{unobserved} time steps and should therefore be handled with tools designed for unobserved variables. Among these tools, we find the proxy variable approach a particularly good fit, in the sense that the proxy of an unobserved variable is naturally the same variable at an observed time step. Following this intuition, we establish comprehensive structural identifiability results. Our method is constraint-based and requires no regularity conditions beyond common continuity and differentiability. These theoretical advantages are reflected in experimental results.
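
The subsampling problem itself is easy to reproduce. The following toy simulation (ours, not the paper's algorithm) generates a causal chain at the fine time scale and shows how the lag-1 dependence weakens and smears once only every third step is observed, which is the distortion any identification method must cope with.

```python
import numpy as np

rng = np.random.default_rng(1)
T, k = 20000, 3                      # fine time steps; we observe every k-th
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
    y[t] = 0.8 * x[t - 1] + 0.2 * y[t - 1] + rng.standard_normal()

def lag1_corr(a, b):
    return np.corrcoef(a[:-1], b[1:])[0, 1]

print(f"fine scale  corr(x_t, y_t+1) = {lag1_corr(x, y):.2f}")
xs, ys = x[::k], y[::k]              # subsampled observations
print(f"subsampled  corr(x_t, y_t+1) = {lag1_corr(xs, ys):.2f}")  # noticeably weaker
```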

Sorting is one of the most basic algorithms, and developing highly parallel sorting programs is becoming increasingly important in high-performance computing because the number of CPU cores per node in modern supercomputers continues to grow. In this study, we implement two multi-threaded sorting algorithms based on samplesort and compare their performance on the supercomputer Fugaku. The first algorithm divides the input sequence into multiple blocks, sorts each block, and then selects pivots by sampling each sorted block at regular intervals. Each block is then partitioned using the pivots, and corresponding partitions from different blocks are merged into a single sorted sequence. The second algorithm differs from the first only in how pivots are selected: binary search is used to choose pivots such that the partitions contain equal numbers of elements. We compare the performance of the two algorithms combined with different sequential sorting and multiway merging algorithms, and demonstrate that the second algorithm, using BlockQuicksort (a quicksort accelerated by reducing conditional branches) for sequential sorting and a selection tree for merging, consistently achieves high speed and high parallel efficiency across various input data types and sizes.
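
A sequential sketch of the first variant (regular sampling; all names ours). In the paper's multithreaded implementation, steps 1 and 3 run with one thread per block and step 4 uses a selection tree; the second variant replaces step 2 by binary searches that equalize partition sizes.

```python
import bisect, heapq, random

def samplesort(data, p=4):
    """Sequential sketch of samplesort with regular sampling; in parallel code
    each block is handled by its own thread."""
    if len(data) <= p:
        return sorted(data)
    # 1. split into p blocks and sort each
    step = (len(data) + p - 1) // p
    blocks = [sorted(data[i:i + step]) for i in range(0, len(data), step)]
    # 2. regular sampling: up to p samples per block at even intervals,
    #    then every p-th sample becomes a pivot
    samples = sorted(s for b in blocks for s in b[::max(1, len(b) // p)][:p])
    pivots = samples[p::p][:p - 1]
    q = len(pivots) + 1
    # 3. partition each sorted block by binary search against the pivots
    parts = [[] for _ in range(q)]
    for b in blocks:
        cuts = [0] + [bisect.bisect_right(b, piv) for piv in pivots] + [len(b)]
        for i in range(q):
            parts[i].append(b[cuts[i]:cuts[i + 1]])
    # 4. multiway-merge the sorted runs of each partition
    out = []
    for runs in parts:
        out.extend(heapq.merge(*runs))
    return out

xs = [random.randrange(1000) for _ in range(10000)]
assert samplesort(xs) == sorted(xs)
```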
