
A set $S\subseteq V$ of vertices of a graph $G$ is a \emph{$c$-clustered set} if it induces a subgraph in which each component has order at most $c$, and $\alpha_c(G)$ denotes the size of a largest $c$-clustered set. For any graph $G$ on $n$ vertices with treewidth $k$, we show that $\alpha_c(G) \geq \frac{c}{c+k+1}n$, which improves a result of Wood [arXiv:2208.10074, August 2022], while we construct $n$-vertex graphs $G$ of treewidth~$k$ with $\alpha_c(G)\leq \frac{c}{c+k}n$. In the case $c\leq 2$ or $k=1$ we prove the better lower bound $\alpha_c(G) \geq \frac{c}{c+k}n$, which settles a conjecture of Chappell and Pelsmajer [Electron.\ J.\ Comb., 2013] and is best possible. Finally, in the case $c=3$ and $k=2$, we show $\alpha_c(G) \geq \frac{5}{9}n$, which is best possible.
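
As a quick arithmetic check (ours, not the paper's), the three bounds interleave as expected in the case $c=3$, $k=2$:

\[
\frac{c}{c+k+1} = \frac{3}{6} = \frac{1}{2}
\;<\; \frac{5}{9}
\;<\; \frac{c}{c+k} = \frac{3}{5},
\]

so the special-case bound $\frac{5}{9}n$ sits strictly between the general lower bound and the upper-bound construction.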

Related content

The notion of $\mathcal{H}$-treewidth, where $\mathcal{H}$ is a hereditary graph class, was recently introduced as a generalization of the treewidth of an undirected graph. Roughly speaking, a graph of $\mathcal{H}$-treewidth at most $k$ can be decomposed into (arbitrarily large) $\mathcal{H}$-subgraphs which interact only through vertex sets of size $O(k)$ which can be organized in a tree-like fashion. $\mathcal{H}$-treewidth can be used as a hybrid parameterization to develop fixed-parameter tractable algorithms for $\mathcal{H}$-deletion problems, which ask to find a minimum vertex set whose removal from a given graph $G$ turns it into a member of $\mathcal{H}$. The bottleneck in the current parameterized algorithms lies in the computation of suitable tree $\mathcal{H}$-decompositions. We present FPT approximation algorithms to compute tree $\mathcal{H}$-decompositions for hereditary and union-closed graph classes $\mathcal{H}$. Given a graph of $\mathcal{H}$-treewidth $k$, we can compute a 5-approximate tree $\mathcal{H}$-decomposition in time $f(O(k)) \cdot n^{O(1)}$ whenever $\mathcal{H}$-deletion parameterized by solution size can be solved in time $f(k) \cdot n^{O(1)}$ for some function $f(k) \geq 2^k$. The current-best algorithms either achieve an approximation factor of $k^{O(1)}$ or construct optimal decompositions while suffering from non-uniformity with unknown parameter dependence. Using these decompositions, we obtain algorithms solving Odd Cycle Transversal in time $2^{O(k)} \cdot n^{O(1)}$ parameterized by $\mathsf{bipartite}$-treewidth and Vertex Planarization in time $2^{O(k \log k)} \cdot n^{O(1)}$ parameterized by $\mathsf{planar}$-treewidth, showing that these can be as fast as the solution-size parameterizations and giving the first ETH-tight algorithms for parameterizations by hybrid width measures.
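
To make the $\mathcal{H}$-deletion problem concrete, here is a minimal sketch for $\mathcal{H} = {}$bipartite graphs, i.e., Odd Cycle Transversal. The naive $n^{O(k)}$ subset search below merely illustrates the problem statement; the decomposition algorithms above assume a genuinely fixed-parameter $f(k) \cdot n^{O(1)}$ routine in its place.

```python
from itertools import combinations
from collections import deque

def is_bipartite(adj, removed=frozenset()):
    """2-color the graph (minus removed vertices) by BFS."""
    color = {}
    for s in adj:
        if s in removed or s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in removed:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def oct_brute(adj, k):
    """Smallest set of <= k vertices whose removal leaves a bipartite graph.
    Naive n^O(k) search, standing in for the assumed f(k) * poly(n)
    H-deletion routine (here H = bipartite graphs)."""
    for size in range(k + 1):
        for S in combinations(list(adj), size):
            if is_bipartite(adj, frozenset(S)):
                return set(S)
    return None

triangle_plus = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(oct_brute(triangle_plus, 1))  # {1}: deleting one triangle vertex suffices
```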

We study the problem of extracting randomness from somewhere-random sources, and related combinatorial phenomena: partition analogues of Shearer's lemma on projections. A somewhere-random source is a tuple $(X_1, \ldots, X_t)$ of (possibly correlated) $\{0,1\}^n$-valued random variables $X_i$ where for some unknown $i \in [t]$, $X_i$ is guaranteed to be uniformly distributed. An \emph{extracting merger} is a seeded device that takes a somewhere-random source as input and outputs nearly uniform random bits. We study the seed length needed for extracting mergers with constant $t$ and constant error. We show: $\cdot$ Just like in the case of standard extractors, seedless extracting mergers with even just one output bit do not exist. $\cdot$ Unlike the case of standard extractors, it \emph{is} possible to have extracting mergers that output a constant number of bits using only constant seed. Furthermore, a random choice of merger does not work for this purpose! $\cdot$ Nevertheless, just like in the case of standard extractors, an extracting merger which gets most of the entropy out (namely, having $\Omega(n)$ output bits) must have $\Omega(\log n)$ seed. This is the main technical result of our work, and is proved by a second-moment strengthening of the graph-theoretic approach of Radhakrishnan and Ta-Shma to extractors. In contrast, seed-length/output-length tradeoffs for condensing mergers (where the output is only required to have high min-entropy) can be fully explained by using standard condensers. Inspired by such considerations, we also formulate a new and basic class of problems in combinatorics: partition analogues of Shearer's lemma. We show basic results in this direction; in particular, we prove that in any partition of the $3$-dimensional cube $[0,1]^3$ into two parts, one of the parts has an axis-parallel $2$-dimensional projection of area at least $3/4$.
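
The final projection claim is easy to probe numerically. Below is a small grid check (our illustration; the halfspace partition $x+y+z<3/2$ is an arbitrary example) confirming that for this partition both parts have axis-parallel projections of area well above $3/4$:

```python
import numpy as np

# Grid check for one sample partition of [0,1]^3:
# part A = {x + y + z < 3/2}, part B = its complement.
m = 200
g = (np.arange(m) + 0.5) / m
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
A = X + Y + Z < 1.5

for name, part in [("A", A), ("B", ~A)]:
    # area of each axis-parallel 2-D projection; the exists-quantifier
    # over the projected-out coordinate becomes any() along that axis
    areas = [part.any(axis=ax).mean() for ax in range(3)]
    print(name, [round(a, 3) for a in areas])
# Both parts project onto area ~7/8 = 0.875 >= 3/4, consistent with the bound.
```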

In this work, we consider a rational approximation of the exponential function to design an algorithm for computing the matrix exponential in the Hermitian case. Using partial fraction decomposition, we obtain a parallelizable method in which the computation reduces to solving independent linear systems. We analyze the effects of rounding errors on the accuracy of our algorithm. We complete this work with numerical tests showing the efficiency of our method and a comparison of its performance with Krylov algorithms.
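
The structure of such a method can be illustrated with any rational approximation $r(z)=\sum_k w_k/(z-z_k)$ of $e^z$: each pole contributes one shifted linear solve, and the solves are mutually independent. The sketch below is our stand-in, not the paper's approximation; it discretizes the Cauchy integral $\exp(A) = \frac{1}{2\pi i}\oint e^z (zI-A)^{-1}\,dz$ by the trapezoid rule on a circle enclosing the spectrum:

```python
import numpy as np

def expm_contour(A, N=32, center=0.0, radius=2.0):
    """Approximate exp(A) for Hermitian A via trapezoid quadrature of the
    Cauchy integral on a circular contour enclosing the spectrum. Each
    quadrature node costs one independent resolvent solve (z I - A)^{-1},
    so the N solves can run in parallel."""
    n = A.shape[0]
    E = np.zeros((n, n), dtype=complex)
    I = np.eye(n)
    for k in range(N):
        theta = 2 * np.pi * (k + 0.5) / N
        z = center + radius * np.exp(1j * theta)
        R = np.linalg.solve(z * I - A, I)        # independent linear solve
        E += np.exp(z) * radius * np.exp(1j * theta) * R
    return (E / N).real  # exact result is real symmetric for Hermitian A

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2
A /= np.max(np.abs(np.linalg.eigvalsh(A)))       # spectrum inside [-1, 1]
w, V = np.linalg.eigh(A)
exact = (V * np.exp(w)) @ V.T
print(np.max(np.abs(expm_contour(A) - exact)))   # small error
```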

The multivariate Hawkes process is a past-dependent point process used to model the relationship of event occurrences between different phenomena. Although the Hawkes process was originally introduced to describe excitation effects, which means that one event increases the chances of another occurring, there has been a growing interest in modelling the opposite effect, known as inhibition. In this paper, we focus on how to infer the parameters of a multidimensional exponential Hawkes process with both excitation and inhibition effects. Our first result is to prove the identifiability of this model under a few sufficient assumptions. Then we propose a maximum likelihood approach to estimate the interaction functions, which is, to the best of our knowledge, the first exact inference procedure in the frequentist framework. Our method includes a variable selection step in order to recover the support of the interactions and therefore to infer the connectivity graph. A further benefit of our method is that it provides an explicit computation of the log-likelihood, which additionally enables a goodness-of-fit test for assessing the quality of the estimates. We compare our method to standard approaches, which were developed in the linear framework and are not specifically designed for handling inhibiting effects. We show that the proposed estimator performs better on synthetic data than alternative approaches. We also illustrate the application of our procedure to a neuronal activity dataset, which highlights the presence of both exciting and inhibiting effects between neurons.
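
For intuition, here is a naive one-dimensional version (our sketch, not the paper's exact procedure): with intensity $\lambda(t)=\max(0,\,\mu+\sum_{t_i<t}\alpha e^{-\beta(t-t_i)})$, where $\alpha<0$ models inhibition, the log-likelihood is $\sum_i \log\lambda(t_i) - \int_0^T \lambda(t)\,dt$; the clipping at zero breaks the closed-form compensator of the purely exciting case, so we integrate numerically:

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """lambda(t) = max(0, mu + sum_{t_i < t} alpha * exp(-beta (t - t_i)));
    alpha < 0 models inhibition."""
    past = events[events < t]
    return max(0.0, mu + np.sum(alpha * np.exp(-beta * (t - past))))

def log_likelihood(events, T, mu, alpha, beta, grid=10_000):
    """Naive log-likelihood: log-intensities at the events minus the
    compensator integral, approximated by the trapezoid rule."""
    ll = sum(np.log(intensity(t, events, mu, alpha, beta)) for t in events)
    ts = np.linspace(0.0, T, grid)
    lam = np.array([intensity(t, events, mu, alpha, beta) for t in ts])
    return ll - np.trapz(lam, ts)

events = np.array([0.5, 1.1, 1.3, 2.7, 4.0])
print(log_likelihood(events, T=5.0, mu=1.0, alpha=-0.8, beta=2.0))
```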

In PATH SET PACKING, the input is an undirected graph $G$, a collection $\mathcal{P}$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ pairwise edge-disjoint paths in $\mathcal{P}$. We study the parameterized complexity of PATH SET PACKING with respect to both natural and structural parameters. We show that the problem is $W[1]$-hard with respect to vertex cover number, and $W[1]$-hard with respect to pathwidth plus maximum degree plus solution size. These results answer an open question raised at COCOON 2018. On the positive side, we present an FPT algorithm parameterized by feedback vertex number plus maximum degree, and an FPT algorithm parameterized by treewidth plus maximum degree plus the maximum length of a path in $\mathcal{P}$. These positive results complement the hardness of PATH SET PACKING with respect to any subset of the parameters used in the FPT algorithms. We also give a $4$-approximation algorithm for PATH SET PACKING which runs in FPT time when parameterized by feedback edge number.
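
The objective itself is simple to state in code. The greedy heuristic below (our illustration; it is not the paper's $4$-approximation algorithm) keeps a path whenever it is edge-disjoint from everything kept so far:

```python
def greedy_path_packing(paths):
    """Greedy heuristic for edge-disjoint path packing: scan paths in order
    of increasing length and keep a path whenever its edges are disjoint
    from all paths kept so far. Each path is a list of vertices."""
    used = set()
    chosen = []
    for p in sorted(paths, key=len):
        edges = {frozenset(e) for e in zip(p, p[1:])}
        if used.isdisjoint(edges):
            chosen.append(p)
            used |= edges
    return chosen

paths = [[1, 2, 3], [3, 4], [2, 3, 4], [1, 2]]
print(greedy_path_packing(paths))  # [[3, 4], [1, 2]]
```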

The extremal theory of forbidden 0--1 matrices studies the asymptotic growth of the function $\mathrm{Ex}(P,n)$, which is the maximum weight of a matrix $A\in\{0,1\}^{n\times n}$ whose submatrices avoid a fixed pattern $P\in\{0,1\}^{k\times l}$. This theory has been wildly successful at resolving problems in combinatorics, discrete and computational geometry, structural graph theory, and the analysis of data structures, particularly corollaries of the dynamic optimality conjecture. All these applications use acyclic patterns, meaning that when $P$ is regarded as the adjacency matrix of a bipartite graph, the graph is acyclic. The biggest open problem in this area is to bound $\mathrm{Ex}(P,n)$ for acyclic $P$. Prior results have only ruled out the strict $O(n\log n)$ bound conjectured by Füredi and Hajnal. It is consistent with prior results that $\forall P.\ \mathrm{Ex}(P,n)\leq n\log^{1+o(1)} n$, and also consistent that $\forall \epsilon>0.\ \exists P.\ \mathrm{Ex}(P,n) \geq n^{2-\epsilon}$. In this paper we establish a stronger lower bound on the extremal functions of acyclic $P$. Specifically, we give a new construction of relatively dense 0--1 matrices with $\Theta(n(\log n/\log\log n)^t)$ 1s that avoid an acyclic pattern $X_t$. Pach and Tardos have conjectured that this type of result is the best possible, i.e., no acyclic $P$ exists for which $\mathrm{Ex}(P,n)\geq n(\log n)^{\omega(1)}$.
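
The notion of avoidance here is concrete: $A$ contains $P$ if some submatrix of $A$ (rows and columns kept in order) has a 1 wherever $P$ does, and avoids $P$ otherwise. A brute-force checker (our illustration, exponential and only for small instances) makes the definition precise:

```python
from itertools import combinations
import numpy as np

def contains(A, P):
    """True if A contains pattern P: some k x l submatrix of A (rows and
    columns kept in order) has a 1 wherever P does. Brute force over all
    row/column subsets; suitable for small matrices only."""
    n, m = A.shape
    k, l = P.shape
    for rows in combinations(range(n), k):
        for cols in combinations(range(m), l):
            if np.all(A[np.ix_(rows, cols)] >= P):
                return True
    return False

P = np.array([[1, 0], [0, 1]])     # a 2 x 2 "identity" pattern
A = np.array([[1, 1, 0],
              [0, 0, 1],
              [0, 1, 0]])
print(contains(A, P))              # True: rows (0, 1), columns (0, 2)
```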

We prove the first polynomial separation between randomized and deterministic time-space tradeoffs for multi-output functions. In particular, we present a total function that, on an input of $n$ elements of $[n]$, outputs $O(n)$ elements, such that: (1) there exists a randomized oblivious algorithm with space $O(\log n)$, time $O(n\log n)$ and one-way access to randomness, that computes the function with probability $1-O(1/n)$; (2) any deterministic oblivious branching program with space $S$ and time $T$ that computes the function must satisfy $T^2S\geq\Omega(n^{2.5}/\log n)$. This implies that logspace randomized algorithms for multi-output functions cannot be black-box derandomized without an $\widetilde{\Omega}(n^{1/4})$ overhead in time. Previously, all polynomial time-space tradeoffs for multi-output functions were proved via the Borodin-Cook method, a probabilistic method that inherently gives the same lower bound for randomized and deterministic branching programs, so our lower bound proof is intrinsically different from previous works. We also examine other natural candidates for proving such separations, and show that any polynomial separation for these problems would resolve the long-standing open problem of proving an $n^{1+\Omega(1)}$ time lower bound for decision problems with $\mathrm{polylog}(n)$ space.

The goal of this paper is to understand how exponential-time approximation algorithms can be obtained from existing polynomial-time approximation algorithms, existing parameterized exact algorithms, and existing parameterized approximation algorithms. More formally, we consider a monotone subset minimization problem over a universe of size $n$ (e.g., Vertex Cover or Feedback Vertex Set). We have access to an algorithm that finds an $\alpha$-approximate solution in time $c^k \cdot n^{O(1)}$ if a solution of size $k$ exists (and more generally, an extension algorithm that can approximate in a similar way if a set can be extended to a solution with $k$ further elements). Our goal is to obtain a $d^n \cdot n^{O(1)}$ time $\beta$-approximation algorithm for the problem with $d$ as small as possible. That is, for every fixed $\alpha,c,\beta \geq 1$, we would like to determine the smallest possible $d$ that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the $\alpha$-approximate extension algorithm. Our results completely resolve this question: (1) For every fixed $\alpha,c,\beta \geq 1$, a simple algorithm (``approximate monotone local search'') achieves the optimum value of $d$. (2) Given $\alpha,c,\beta \geq 1$, we can efficiently compute the optimum $d$ up to any precision $\varepsilon > 0$. Earlier work presented algorithms (but no lower bounds) for the special case $\alpha = \beta = 1$ [Fomin et al., J. ACM 2019] and for the special case $\alpha = \beta > 1$ [Esmer et al., ESA 2022]. Our work generalizes these results and in particular confirms that the earlier algorithms are optimal in these special cases.
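
A schematic of the sampling wrapper may help; everything problem-specific is hidden in a hypothetical oracle extend(X, b), standing in for the assumed $\alpha$-approximate extension algorithm, and the parameter choices that make the trade-off optimal are exactly what the paper computes:

```python
import random

def approx_mls(universe, k, t, extend, repeats):
    """Schematic of (approximate) monotone local search. Hypothetical oracle
    extend(X, b): returns a feasible superset of X adding at most ~alpha*b
    elements whenever X extends to a solution with <= b further elements,
    else None. Sampling X of size t and extending with budget k - t trades
    the oracle's c^(k-t) cost against the repetitions needed until some
    sampled X lies inside an optimal solution; optimizing t (and hence
    repeats) yields the d^n running time discussed above."""
    best = None
    for _ in range(repeats):
        X = set(random.sample(sorted(universe), t))
        sol = extend(X, k - t)
        if sol is not None and (best is None or len(sol) < len(best)):
            best = sol
    return best
```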

A new model called Clustering with Neural Network and Index (CNNI) is introduced. CNNI uses a neural network to cluster data points. Training of the neural network mimics supervised learning, with an internal clustering evaluation index acting as the loss function. Experiments test the feasibility of the new model and compare its results with those of other clustering models such as K-means and the Gaussian Mixture Model (GMM). The results show that CNNI can cluster data properly; equipped with MMJ-SC, CNNI yields the first parametric (inductive) clustering model that can deal with non-convex shaped (non-flat geometry) data.
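
The training-loop idea is easy to sketch (our sketch, not the paper's model). We substitute a generic differentiable index, soft within-cluster variance, for MMJ-SC, whose definition we do not reproduce; note this stand-in illustrates only the mechanism of "index as loss", not the non-convex capability, which depends on the index used:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 2)   # toy data standing in for a real dataset
K = 2                     # number of clusters

# network maps points to soft cluster assignments (inductive/parametric)
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, K))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(300):
    A = net(X).softmax(dim=1)                        # soft assignments
    # internal index as loss: soft within-cluster variance
    centers = (A.T @ X) / A.sum(dim=0, keepdim=True).T
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    loss = (A * d2).sum() / len(X)
    opt.zero_grad()
    loss.backward()
    opt.step()

labels = net(X).argmax(dim=1)   # the trained net also labels unseen points
```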

In this paper, we present a stochastic gradient algorithm for minimizing a smooth objective function that is an expectation over noisy cost samples, where only the latter are observed for any given parameter. Our algorithm employs a gradient estimation scheme with random perturbations, which are formed using the truncated Cauchy distribution from the delta sphere. We analyze the bias and variance of the proposed gradient estimator. Our algorithm is particularly useful when the objective function is non-convex and the parameter dimension is high. Through an asymptotic convergence analysis, we establish that our algorithm converges almost surely to the set of stationary points of the objective function, and we derive its asymptotic convergence rate. We also show that our algorithm avoids unstable equilibria, implying convergence to local minima. Further, we perform a non-asymptotic convergence analysis of our algorithm; in particular, we establish a non-asymptotic bound for finding an $\epsilon$-stationary point of the non-convex objective function. Finally, we demonstrate numerically through simulations that our algorithm outperforms GSF, SPSA, and RDSA by a significant margin over a few non-convex settings, and we further validate its performance over convex (noisy) objectives.
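
The two-point structure of such estimators is easy to sketch. The code below uses classic SPSA with Rademacher ($\pm 1$) perturbations as a stand-in; the paper's scheme instead draws perturbations from a truncated Cauchy distribution on the delta sphere, whose normalization we do not reproduce here:

```python
import numpy as np

def spsa_like_step(f, theta, delta, rng):
    """One two-point gradient estimate. Classic SPSA perturbs all
    coordinates at once with +/-1 signs and divides elementwise by the
    perturbation; only two noisy cost evaluations are needed per step."""
    u = rng.choice([-1.0, 1.0], size=theta.shape)
    y_plus, y_minus = f(theta + delta * u), f(theta - delta * u)
    return (y_plus - y_minus) / (2.0 * delta) / u

rng = np.random.default_rng(1)
f = lambda th: np.sum(th**2) + 0.01 * rng.standard_normal()  # noisy cost
theta = np.ones(10)
for n in range(1, 2001):
    g = spsa_like_step(f, theta, delta=0.1 / n**0.25, rng=rng)
    theta -= (0.5 / n) * g         # diminishing step sizes a_n = 0.5/n
print(np.linalg.norm(theta))       # approaches 0, the minimizer
```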
