
The isomorphism problem for graphs (GI) and the isomorphism problem for groups (GrISO) have been studied extensively by researchers. The current best algorithms for both problems run in quasipolynomial time. In this paper, we study the isomorphism problem of graphs that are defined in terms of groups, namely power graphs, directed power graphs, and enhanced power graphs. Checking whether the underlying groups are isomorphic is not enough to solve the isomorphism problem for such graphs, since the power graphs (or the directed power graphs or the enhanced power graphs) of two nonisomorphic groups can be isomorphic. Nevertheless, it is interesting to ask whether the underlying group structure can be exploited to design better isomorphism algorithms for these graphs. We design polynomial-time algorithms for the isomorphism problems for the power graphs, the directed power graphs, and the enhanced power graphs arising from finite nilpotent groups. In contrast, no polynomial-time algorithm is known for the group isomorphism problem, even for nilpotent groups of class 2. We note that our algorithm does not require the underlying groups of the input graphs to be given. The isomorphism problems of power graphs and enhanced power graphs are solved by first computing the directed power graphs from the input graphs. The question of efficiently computing the directed power graph from the power graph or the enhanced power graph was posed by Cameron [IJGT'22]; we therefore give a solution to Cameron's question.
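
The central objects here are easy to make concrete. As an illustrative sketch (not the paper's algorithm, which works from the input graph alone without access to the group), the following Python snippet builds the directed power graph of the cyclic group $\mathbb{Z}_n$, with an arc $u \to v$ whenever $v$ lies in the cyclic subgroup generated by $u$; forgetting orientations gives the power graph.

```python
def directed_power_graph(n):
    """Directed power graph of the cyclic group Z_n (written additively).

    There is an arc u -> v whenever v lies in the cyclic subgroup
    generated by u, i.e. v is a "power" (multiple) of u, and u != v.
    """
    arcs = set()
    for u in range(n):
        # the cyclic subgroup <u> = {0, u, 2u, ...} mod n
        generated = set()
        x = 0
        while True:
            generated.add(x)
            x = (x + u) % n
            if x == 0:
                break
        for v in generated:
            if v != u:
                arcs.add((u, v))
    return arcs

def power_graph(n):
    """Undirected power graph: forget the orientation of the arcs."""
    return {frozenset(arc) for arc in directed_power_graph(n)}

if __name__ == "__main__":
    print(sorted(directed_power_graph(6)))
```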

Related Content

We prove tight bounds on the site percolation threshold for $k$-uniform hypergraphs of maximum degree $\Delta$ and for $k$-uniform hypergraphs of maximum degree $\Delta$ in which any pair of edges overlaps in at most $r$ vertices. The hypergraphs that achieve these bounds are hypertrees, but unlike in the case of graphs, there are many different $k$-uniform, $\Delta$-regular hypertrees. Determining the extremal tree for a given $k, \Delta, r$ involves an optimization problem, and our bounds arise from a convex relaxation of this problem. By combining our percolation bounds with the method of disagreement percolation we obtain improved bounds on the uniqueness threshold for the hard-core model on hypergraphs satisfying the same constraints. Our uniqueness conditions imply exponential weak spatial mixing, and go beyond the known bounds for rapid mixing of local Markov chains and existence of efficient approximate counting and sampling algorithms. Our results lead to natural conjectures regarding the aforementioned algorithmic tasks, based on the intuition that uniqueness thresholds for the extremal hypertrees for percolation determine computational thresholds.

Given a graph $G = (V, E)$, a non-empty set $S \subseteq V$ is a defensive alliance if, for every vertex $v \in S$, the majority of its closed neighbours are in $S$, that is, $|N_G[v] \cap S| \geq |N_G[v] \setminus S|$. The decision version of the problem is known to be NP-complete even when restricted to split and bipartite graphs. The problem is \textit{fixed-parameter tractable} for the parameters solution size, vertex cover number, and neighbourhood diversity. For the parameters treewidth and feedback vertex set number, the problem is W[1]-hard. In this paper, we study the defensive alliance problem for graphs with bounded degree. We show that the problem is \textit{polynomial-time solvable} on graphs with maximum degree at most 5 and NP-complete on graphs with maximum degree 6. This rules out fixed-parameter tractability of the problem for the parameter maximum degree of the graph. We also consider the problem from the standpoint of parameterized complexity. We provide an FPT algorithm using the integer linear programming approach for the parameter distance to clique. We also answer an open question posed in \cite{AG2} by providing an FPT algorithm for the parameter twin cover.
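
For concreteness, the defining condition is straightforward to check. Below is a minimal sketch of a defensive alliance verifier, assuming the graph is given as a networkx graph; the hard part addressed by the paper is of course finding a small alliance, not verifying one.

```python
import networkx as nx

def is_defensive_alliance(G, S):
    """Check whether S is a defensive alliance in G.

    For every v in S, the closed neighbourhood of v must contain at least
    as many vertices inside S as outside: |N[v] ∩ S| >= |N[v] \\ S|.
    """
    S = set(S)
    if not S:
        return False                      # the alliance must be non-empty
    for v in S:
        closed_nbhd = set(G[v]) | {v}
        inside = len(closed_nbhd & S)
        outside = len(closed_nbhd - S)
        if inside < outside:
            return False
    return True

if __name__ == "__main__":
    G = nx.cycle_graph(6)
    print(is_defensive_alliance(G, {0}))     # False: 1 ally vs 2 outsiders
    print(is_defensive_alliance(G, {0, 1}))  # True: each member has 2 allies vs 1 outsider
```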

This paper introduces a collection of four data sets, similar to Anscombe's Quartet, that aim to highlight the challenges involved when estimating causal effects. Each of the four data sets is generated based on a distinct causal mechanism: the first involves a collider, the second involves a confounder, the third involves a mediator, and the fourth involves the induction of M-Bias by an included factor. The paper includes a mathematical summary of each data set, as well as directed acyclic graphs that depict the relationships between the variables. Despite the fact that the statistical summaries and visualizations for each data set are identical, the true causal effect differs, and estimating it correctly requires knowledge of the data-generating mechanism. These example data sets can help practitioners gain a better understanding of the assumptions underlying causal inference methods and emphasize the importance of gathering more information beyond what can be obtained from statistical tools alone. The paper also includes R code for reproducing all figures and provides access to the data sets themselves through an R package named quartets.
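
To illustrate the kind of pitfall these data sets are designed to expose, the following hypothetical simulation (made-up data-generating processes, not the quartets data) shows that with a confounder the unadjusted estimate is biased and adjustment fixes it, whereas with a collider it is adjustment that introduces the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def ols_slope(y, *covariates):
    """Coefficient of the first covariate in an OLS fit with an intercept."""
    A = np.column_stack([np.ones(len(y))] + list(covariates))
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

# Confounder: Z influences both exposure X and outcome Y; the true direct
# effect of X on Y is 1.  Adjusting for Z is required to recover it.
Z = rng.normal(size=n)
X = Z + rng.normal(size=n)
Y = X + Z + rng.normal(size=n)
print(ols_slope(Y, X))       # ~1.5  biased: confounder ignored
print(ols_slope(Y, X, Z))    # ~1.0  correct: confounder adjusted for

# Collider: X and Y both influence C; the true effect of X on Y is 0,
# and adjusting for the collider C *introduces* bias.
X2 = rng.normal(size=n)
Y2 = rng.normal(size=n)
C = X2 + Y2 + rng.normal(size=n)
print(ols_slope(Y2, X2))     # ~0.0   correct: no adjustment needed
print(ols_slope(Y2, X2, C))  # ~-0.5  biased: conditioning on a collider
```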

We study constant-cost randomized communication problems and relate them to implicit graph representations in structural graph theory. Specifically, constant-cost communication problems correspond to hereditary graph families that admit constant-size adjacency sketches, or equivalently constant-size probabilistic universal graphs (PUGs), and these graph families are a subset of families that admit adjacency labeling schemes of size O(log n), which are the subject of the well-studied implicit graph question (IGQ). We initiate the study of the hereditary graph families that admit constant-size PUGs, with the two (equivalent) goals of (1) understanding randomized constant-cost communication problems, and (2) understanding a probabilistic version of the IGQ. For each family $\mathcal F$ studied in this paper (including the monogenic bipartite families, product graphs, interval and permutation graphs, families of bounded twin-width, and others), it holds that the subfamilies $\mathcal H \subseteq \mathcal F$ admit constant-size PUGs (i.e. adjacency sketches) if and only if they are stable (i.e. they forbid a half-graph as a semi-induced subgraph). The correspondence between communication problems and hereditary graph families allows for a new method of constructing adjacency labeling schemes. By this method, we show that the families of induced subgraphs of Cartesian products are positive examples for the IGQ. We prove that this probabilistic construction cannot be derandomized by using an Equality oracle, i.e. the Equality oracle cannot simulate the k-Hamming Distance communication protocol. We also obtain constant-size sketches for deciding $\mathsf{dist}(x, y) \le k$ for vertices $x$, $y$ in any stable graph family with bounded twin-width. This generalizes to constant-size sketches for deciding first-order formulas over the same graphs.
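
As a point of reference for the communication model, the prototypical constant-cost randomized problem is Equality: with shared randomness, comparing short random hashes decides $x = y$ with constant error probability, independently of the input length. The sketch below illustrates that idea; it is only a toy for the model, not one of the adjacency sketches constructed in the paper.

```python
import random

def equality_sketch(x: int, bits: int = 32, trials: int = 20, seed: int = 0) -> int:
    """A tiny randomized sketch for the Equality problem.

    Alice and Bob share the random seed (public randomness).  Each hashes
    their input to `trials` random inner products mod 2; comparing the two
    `trials`-bit sketches decides x == y with error probability 2**-trials,
    regardless of how long the inputs are.
    """
    rng = random.Random(seed)
    sketch = 0
    for _ in range(trials):
        mask = rng.getrandbits(bits)             # shared random bit vector
        parity = bin(x & mask).count("1") % 2    # inner product mod 2
        sketch = (sketch << 1) | parity
    return sketch

if __name__ == "__main__":
    x, y = 123456789, 123456789
    print(equality_sketch(x) == equality_sketch(y))      # True
    print(equality_sketch(x) == equality_sketch(y + 1))  # almost surely False
```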

The number needed to treat (NNT) is an efficacy index defined as the average number of patients that need to be treated to attain one additional treatment benefit. In observational studies, specifically in epidemiology, the adequacy of the populationwise NNT is questionable, since the characteristics of the exposed group may differ substantially from those of the unexposed. To address this issue, groupwise efficacy indices were defined: the Exposure Impact Number (EIN) for the exposed group and the Number Needed to Expose (NNE) for the unexposed. Each index answers a distinct research question, since it targets a different sub-population. In observational studies, the group allocation is typically affected by confounders that might be unmeasured. The available estimation methods, which rely either on randomization or on the sufficiency of the measured covariates for confounding control, therefore yield inconsistent estimators of the true NNT (EIN, NNE) in such settings. Using Rubin's potential outcomes framework, we explicitly define the NNT and its derived indices as causal contrasts. Next, we introduce a novel method that uses instrumental variables to estimate the three aforementioned indices in observational studies. We present two analytical examples and a corresponding simulation study. The simulation study illustrates that the novel estimators are consistent, unlike the previously available methods, and that their confidence intervals attain the nominal coverage rates. Finally, a real-world data example of the effect of vitamin D deficiency on the mortality rate is presented.
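
For intuition, under randomization the NNT is simply the reciprocal of the risk difference between the treated and untreated groups; the point of the paper is that this naive estimator breaks down under unmeasured confounding, where an instrumental-variable estimator is needed instead. A small illustrative sketch with made-up data (not the paper's estimators):

```python
import numpy as np

def naive_nnt(outcome_treated, outcome_control):
    """Naive NNT estimate: reciprocal of the estimated risk difference.

    Valid under randomization (no confounding); with unmeasured confounders
    this estimator is inconsistent, which motivates the IV-based method.
    """
    p1 = np.mean(outcome_treated)   # probability of benefit if treated
    p0 = np.mean(outcome_control)   # probability of benefit if untreated
    risk_difference = p1 - p0
    return np.inf if risk_difference <= 0 else 1.0 / risk_difference

# toy example: 60% benefit under treatment vs 40% under control -> NNT = 5
rng = np.random.default_rng(1)
treated = rng.binomial(1, 0.6, size=10_000)
control = rng.binomial(1, 0.4, size=10_000)
print(naive_nnt(treated, control))   # roughly 5
```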

Five Cells is a logic puzzle consisting of a rectangular grid, with some cells containing a number. The player has to partition the grid into blocks, each consisting of five cells, such that the number in each cell equals the number of edges of that cell that are borders of blocks. In this paper, we propose a physical zero-knowledge proof protocol for Five Cells using a deck of playing cards, which allows a prover to physically show that he/she knows a solution of the puzzle without revealing it. More importantly, as an optimization of our protocol we develop a technique to verify that a graph coloring assigns different colors to adjacent vertices without revealing any information about the coloring. This technique reduces the number of required cards in our protocol from quadratic to linear in the number of cells, and can also be used in other protocols related to graph coloring.
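
The puzzle constraint is easy to state programmatically. The following sketch verifies a candidate Five Cells solution under the assumption that the outer boundary of the grid counts as a block border (one common convention); connectivity of blocks, required in the actual puzzle, is omitted here for brevity.

```python
from collections import Counter

def verify_five_cells(clues, blocks):
    """Check a Five Cells solution.

    `clues`  -- 2-D list; clues[r][c] is an int or None for an empty cell.
    `blocks` -- 2-D list of block ids, one per cell, describing the partition.

    A solution is valid when every block has exactly five cells and every
    clue equals the number of that cell's four edges lying on a block border.
    """
    rows, cols = len(blocks), len(blocks[0])

    # every block must consist of exactly five cells
    sizes = Counter(blocks[r][c] for r in range(rows) for c in range(cols))
    if any(size != 5 for size in sizes.values()):
        return False

    for r in range(rows):
        for c in range(cols):
            if clues[r][c] is None:
                continue
            borders = 0
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                outside = not (0 <= nr < rows and 0 <= nc < cols)
                if outside or blocks[nr][nc] != blocks[r][c]:
                    borders += 1
            if borders != clues[r][c]:
                return False
    return True

# a 1x5 grid with a single block: end cells have 3 border edges, middle cells 2
clues  = [[3, 2, None, 2, 3]]
blocks = [[0, 0, 0, 0, 0]]
print(verify_five_cells(clues, blocks))   # True
```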

We consider a generalized poset sorting problem (GPS), in which we are given a query graph $G = (V, E)$ and an unknown poset $\mathcal{P}(V, \prec)$ defined on the same vertex set $V$, and the goal is to make as few queries as possible to edges in $G$ in order to fully recover $\mathcal{P}$, where each query $(u, v)$ returns the relation between $u, v$, i.e., $u \prec v$, $v \prec u$ or $u \not \sim v$. This generalizes both the poset sorting problem [Faigle et al., SICOMP 88] and the generalized sorting problem [Huang et al., FOCS 11]. We give algorithms with $\tilde{O}(n\cdot \mathrm{poly}(k))$ query complexity when $G$ is a complete bipartite graph or $G$ is stochastic under the Erdős–Rényi model, where $k$ is the \emph{width} of the poset; these generalize [Daskalakis et al., SICOMP 11], which studies only the case of a complete graph $G$. Both results are based on a unified framework that reduces poset sorting to partitioning the vertices with respect to a given pivot element, which may be of independent interest. Our study of GPS also leads to a new $\tilde{O}(n^{1 - 1 / (2W)})$ competitive ratio for the so-called weighted generalized sorting problem, where $W$ is the number of distinct weights in the query graph. This problem was posed as an open question in [Charikar et al., JCSS 02], and our result makes important progress as it yields the first nontrivial sublinear ratio for general weighted query graphs (for any bounded $W$). We obtain this via an $\tilde{O}(nk + n^{1.5})$ query complexity algorithm for the case where every edge in $G$ is guaranteed to be comparable in the poset, which generalizes an $\tilde{O}(n^{1.5})$ bound for generalized sorting [Huang et al., FOCS 11].
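
The pivot-partitioning primitive mentioned above is simple to state; the toy sketch below runs it against a divisibility-poset oracle. The algorithmic content of the paper lies in choosing pivots and pairs so that few queries are made, which this sketch does not attempt.

```python
def partition_by_pivot(vertices, pivot, query):
    """Split `vertices` by their relation to `pivot` using the query oracle.

    `query(u, v)` returns '<' if u precedes v, '>' if v precedes u, and
    '||' if u and v are incomparable.
    """
    below, above, incomparable = [], [], []
    for v in vertices:
        if v == pivot:
            continue
        rel = query(v, pivot)
        if rel == '<':
            below.append(v)
        elif rel == '>':
            above.append(v)
        else:
            incomparable.append(v)
    return below, above, incomparable

# toy oracle: the divisibility poset on {1, ..., 12}
def divides(u, v):
    if u != v and v % u == 0:
        return '<'
    if u != v and u % v == 0:
        return '>'
    return '||'

print(partition_by_pivot(range(1, 13), 6, divides))
# ([1, 2, 3], [12], [4, 5, 7, 8, 9, 10, 11])
```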

Computing the diameter of a graph, i.e. the largest distance, is a fundamental problem that is central in fine-grained complexity. In undirected graphs, the Strong Exponential Time Hypothesis (SETH) yields a lower bound on the time vs. approximation trade-off that is quite close to the upper bounds. In \emph{directed} graphs, however, where only some of the upper bounds apply, much larger gaps remain. Since $d(u,v)$ may not be the same as $d(v,u)$, there are multiple ways to define the problem, the two most natural being the \emph{(one-way) diameter} ($\max_{(u,v)} d(u,v)$) and the \emph{roundtrip diameter} ($\max_{u,v} d(u,v)+d(v,u)$). In this paper we make progress on the outstanding open question for each of them.
-- We design the first algorithm for diameter in sparse directed graphs to achieve $n^{1.5-\varepsilon}$ time with an approximation factor better than $2$. The new upper bound trade-off makes the directed case appear more similar to the undirected case. Notably, this is the first algorithm for diameter in sparse graphs that benefits from fast matrix multiplication.
-- We design new hardness reductions separating roundtrip diameter from directed and undirected diameter. In particular, a $1.5$-approximation in subquadratic time would refute the All-Nodes $k$-Cycle hypothesis, and any $(2-\varepsilon)$-approximation would imply a breakthrough algorithm for approximate $\ell_{\infty}$-Closest-Pair. Notably, these are the first conditional lower bounds for diameter that are not based on SETH.
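
For reference, both quantities can be computed exactly in $O(nm)$ time by running BFS from every vertex of an unweighted directed graph; this naive baseline is what the fine-grained upper and lower bounds above are measured against. A minimal sketch:

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted single-source shortest paths in a directed graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameters(adj):
    """Return (one-way diameter, roundtrip diameter) of a directed graph.

    adj maps each vertex to an iterable of out-neighbours; the graph is
    assumed strongly connected so that all distances are finite.
    """
    dist = {u: bfs_distances(adj, u) for u in adj}
    one_way = max(dist[u][v] for u in adj for v in adj)
    roundtrip = max(dist[u][v] + dist[v][u] for u in adj for v in adj)
    return one_way, roundtrip

# directed 4-cycle: one-way diameter 3, roundtrip diameter 4
cycle = {0: [1], 1: [2], 2: [3], 3: [0]}
print(diameters(cycle))   # (3, 4)
```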

Despite widespread adoption in practice, guarantees for the LASSO and Group LASSO are strikingly lacking in settings beyond statistical problems, and these algorithms are usually regarded as heuristics in the context of sparse convex optimization on deterministic inputs. We give the first recovery guarantees for the Group LASSO for sparse convex optimization with vector-valued features. We show that if a sufficiently large Group LASSO regularization is applied when minimizing a strictly convex function $l$, then the minimizer is a sparse vector supported on vector-valued features with the largest $\ell_2$ norm of the gradient. Thus, repeating this procedure selects the same set of features as the Orthogonal Matching Pursuit algorithm, which admits recovery guarantees for any function $l$ with restricted strong convexity and smoothness via weak submodularity arguments. This answers open questions of Tibshirani et al. and Yasuda et al. Our result is the first to theoretically explain the empirical success of the Group LASSO for convex functions under general input instances assuming only restricted strong convexity and smoothness. Our result also generalizes provable guarantees for the Sequential Attention algorithm of Yasuda et al., a feature selection algorithm inspired by the attention mechanism. As an application of our result, we give new results for the column subset selection problem, which is well studied when the loss is the Frobenius norm or other entrywise matrix losses. We give the first result for general loss functions for this problem, requiring only restricted strong convexity and smoothness.
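
To make the Group LASSO operation concrete, the sketch below implements block soft-thresholding (the proximal operator of the group penalty) and a plain proximal gradient loop for a least-squares loss on synthetic data; it is only an illustration of how the penalty zeroes out entire vector-valued features, not the selection procedure analyzed in the paper.

```python
import numpy as np

def group_prox(w, groups, threshold):
    """Proximal operator of the Group LASSO penalty (block soft-thresholding).

    Each group of coordinates is shrunk toward zero in l2 norm and is set
    exactly to zero when its norm falls below the threshold, producing
    sparsity at the level of whole vector-valued features.
    """
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= threshold else (1 - threshold / norm) * w[g]
    return out

def group_lasso_least_squares(X, y, groups, lam, iters=500):
    """Proximal gradient descent for 0.5*||Xw - y||^2 + lam * sum_g ||w_g||_2."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = group_prox(w - step * grad, groups, step * lam)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = X[:, :2] @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=200)
groups = [[0, 1], [2, 3], [4, 5]]                 # three vector-valued features
print(np.round(group_lasso_least_squares(X, y, groups, lam=5.0), 2))
# only the first group survives; the other two are driven exactly to zero
```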

Recent years have witnessed the enormous success of low-dimensional vector space representations of knowledge graphs to predict missing facts or find erroneous ones. Currently, however, it is not yet well-understood how ontological knowledge, e.g. given as a set of (existential) rules, can be embedded in a principled way. To address this shortcoming, in this paper we introduce a framework based on convex regions, which can faithfully incorporate ontological knowledge into the vector space embedding. Our technical contribution is two-fold. First, we show that some of the most popular existing embedding approaches are not capable of modelling even very simple types of rules. Second, we show that our framework can represent ontologies that are expressed using so-called quasi-chained existential rules in an exact way, such that any set of facts which is induced using that vector space embedding is logically consistent and deductively closed with respect to the input ontology.
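
As a toy illustration of the geometric idea (using axis-aligned boxes, a particularly simple family of convex regions, rather than the paper's framework), a rule $\mathrm{Cat}(x) \rightarrow \mathrm{Animal}(x)$ can be modelled by requiring the region for Cat to be contained in the region for Animal, so that any embedded entity satisfying the premise automatically satisfies the conclusion:

```python
import numpy as np

class Box:
    """An axis-aligned box used as a convex region for a concept."""
    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)

    def contains_point(self, x):
        return bool(np.all(self.lower <= x) and np.all(x <= self.upper))

    def contains_box(self, other):
        # region inclusion models the rule  other(x) -> self(x)
        return bool(np.all(self.lower <= other.lower) and np.all(other.upper <= self.upper))

# concept regions in a 2-dimensional embedding space
animal = Box([0.0, 0.0], [1.0, 1.0])
cat = Box([0.2, 0.2], [0.4, 0.4])

# the rule Cat(x) -> Animal(x) holds by construction ...
print(animal.contains_box(cat))          # True

# ... so any entity embedded inside the Cat region is an Animal as well,
# which is how deductive closure is enforced geometrically.
felix = np.array([0.3, 0.25])
print(cat.contains_point(felix), animal.contains_point(felix))   # True True
```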
