
We study the problem of graph clustering, where the goal is to partition a graph into clusters, i.e. disjoint subsets of vertices, such that each cluster is well connected internally while sparsely connected to the rest of the graph. In particular, we use a natural bicriteria notion motivated by Kannan, Vempala, and Vetta which we refer to as {\em expander decomposition}. Expander decomposition has become one of the building blocks in the design of fast graph algorithms, most notably in the nearly linear time Laplacian solver by Spielman and Teng, and it also has wide applications in practice. We design an algorithm for the parametrized version of expander decomposition: given a graph $G$ of $m$ edges and a parameter $\phi$, our algorithm finds a partition of the vertices into clusters such that each cluster induces a subgraph of conductance at least $\phi$ (i.e. a $\phi$-expander), and only an $\widetilde{O}(\phi)$ fraction of the edges in $G$ have endpoints across different clusters. Our algorithm runs in $\widetilde{O}(m/\phi)$ time, and is the first nearly linear time algorithm when $\phi$ is at least $1/\log^{O(1)} m$, which is the case in most practical settings and theoretical applications. Previous results either take $\Omega(m^{1+o(1)})$ time, or attain nearly linear time but with a weaker expansion guarantee where each output cluster is only guaranteed to be contained inside some unknown $\phi$-expander. Our result achieves both the nearly linear running time and the strong expander guarantee for clusters. Moreover, a main technique we develop for our result can be applied to obtain a much better \emph{expander pruning} algorithm, the key tool for maintaining an expander decomposition on dynamic graphs. Finally, we note that our algorithm is developed from first principles based on relatively simple and basic techniques, thus making it very likely to be practical.
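To make the clustering criterion concrete, the following minimal Python sketch (an illustration, not the paper's algorithm) computes the conductance of a cut and brute-forces the $\phi$-expander check on a small graph. The adjacency-dict encoding is assumed for exposition, and the exhaustive check is exponential, serving only to pin down the definition each output cluster must satisfy.

```python
# Minimal sketch (not the paper's algorithm): the phi-expander criterion.
# The conductance of a cut (S, V\S) is cut(S) / min(vol(S), vol(V\S)).
from itertools import combinations

def conductance(adj, S):
    """adj: dict vertex -> set of neighbours (undirected); S: set of vertices."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj if u not in S)
    denom = min(vol_S, vol_rest)
    return float('inf') if denom == 0 else cut / denom

def is_phi_expander(adj, phi):
    """Brute-force check that every cut has conductance >= phi.
    Exponential in |V|; only meant to make the definition concrete."""
    vertices = list(adj)
    for k in range(1, len(vertices)):
        for S in combinations(vertices, k):
            if conductance(adj, S) < phi:
                return False
    return True
```

To test a cluster of an expander decomposition against the guarantee, one would apply `is_phi_expander` to the subgraph induced by that cluster.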

Related content

We give practical, efficient algorithms that automatically determine the distributed round complexity of a given locally checkable graph problem, in two settings. We present one algorithm for unrooted regular trees and another for rooted regular trees. The algorithms take the description of a locally checkable labeling problem as input, and their running time is polynomial in the size of the problem description. The algorithms decide if the problem is solvable in $O(\log n)$ rounds. If not, it is known that the complexity has to be $\Theta(n^{1/k})$ for some $k = 1, 2, \dotsc$, and in this case the algorithms also output the right value of the exponent $k$. For rooted trees in the $O(\log n)$ case we can then further determine the exact complexity class by using algorithms from prior work; for unrooted trees the more fine-grained classification in the $O(\log n)$ region remains an open question.
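As a hedged illustration of what a "description of a locally checkable labeling problem" can look like, the sketch below encodes an LCL problem as a finite set of allowed local views and verifies a candidate labeling locally; this encoding is our assumption for exposition, not necessarily the paper's input format.

```python
# Hedged sketch: an LCL problem on an unrooted tree described by the set of
# allowed "views" (a node's label plus the multiset of its neighbours'
# labels). Validity of a labeling is then a purely local check.
from collections import Counter

def is_valid_labeling(tree, labels, allowed):
    """tree: dict node -> list of neighbours; labels: dict node -> label;
    allowed: set of (label, frozenset of (neighbour_label, count)) views."""
    for v, nbrs in tree.items():
        view = (labels[v], frozenset(Counter(labels[u] for u in nbrs).items()))
        if view not in allowed:
            return False
    return True
```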

Weighted round robin (WRR) is a simple, efficient packet scheduler providing low latency and fairness by assigning flow weights that define the number of packets a flow may send consecutively. A variant of WRR that mitigates its tendency to increase burstiness, called interleaved weighted round robin (IWRR), has seen analytical treatment recently \cite{TLBB21}; a network calculus approach was used to obtain the best possible strict service curve. From a different perspective, WRR can also be interpreted as an emulation of an idealized fair scheduler known as generalized processor sharing (GPS). Inspired by profound literature results on the performance analysis of GPS, we show that both WRR and IWRR belong to a larger class of fair schedulers called bandwidth-sharing policies. We use this insight to derive new strict service curves for both schedulers that, under the additional assumption of constrained cross-traffic flows, can significantly improve on the state-of-the-art results and lead to smaller delay bounds.
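To make the difference between the two schedulers concrete, here is a minimal sketch of their service orders, assuming fixed integer weights, always-backlogged flows, and unit-size packets (all simplifying assumptions on our part).

```python
# Hedged sketch of the two service orders for always-backlogged flows.
from itertools import islice

def wrr_order(weights):
    """Classic WRR: flow i sends weights[i] packets consecutively per round."""
    while True:
        for i, w in enumerate(weights):
            for _ in range(w):
                yield i

def iwrr_order(weights):
    """Interleaved WRR: a round is split into max(weights) cycles; in cycle c
    every flow whose weight exceeds c sends one packet, reducing burstiness."""
    while True:
        for c in range(max(weights)):
            for i, w in enumerate(weights):
                if w > c:
                    yield i

print(list(islice(wrr_order([3, 1]), 8)))   # [0, 0, 0, 1, 0, 0, 0, 1]
print(list(islice(iwrr_order([3, 1]), 8)))  # [0, 1, 0, 0, 0, 1, 0, 0]
```

With weights (3, 1), WRR emits flow 0's three packets back-to-back, while IWRR spreads them across the round, which is exactly the burstiness reduction the variant targets.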

Service robots in the future need to execute abstract instructions such as "fetch the milk from the fridge". To translate such instructions into actionable plans, robots require in-depth background knowledge. With regard to interactions with doors and drawers, robots require articulation models that they can use for state estimation and motion planning. Existing frameworks model articulated connections as abstract concepts such as prismatic or revolute, but do not provide a parameterized model of these connections for computation. In this paper, we introduce a novel framework that uses symbolic mathematical expressions to model articulated structures -- robots and objects alike -- in a unified and extensible manner. We provide a theoretical description of this framework and the operations that are supported by its models, and introduce an architecture to exchange our models in robotic applications, making them as flexible as any other environmental observation. To demonstrate the utility of our approach, we employ our practical implementation, Kineverse, to solve common robotics tasks in state estimation and mobile manipulation, and further use it in real-world mobile robot manipulation.
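As a hedged sketch of the idea (using sympy rather than the actual Kineverse API), the snippet below models a revolute joint and a prismatic joint as symbolic homogeneous transforms in 2D; the free symbols can later be substituted or solved for during state estimation.

```python
# Hedged illustration (not the Kineverse API): articulated connections as
# symbolic transforms. A revolute joint is a rotation parameterized by a free
# symbol, a prismatic joint a translation along an axis.
import sympy as sp

theta, d = sp.symbols('theta d', real=True)

def revolute(angle):
    return sp.Matrix([[sp.cos(angle), -sp.sin(angle), 0],
                      [sp.sin(angle),  sp.cos(angle), 0],
                      [0, 0, 1]])

def prismatic(offset, axis=(1, 0)):
    return sp.Matrix([[1, 0, axis[0] * offset],
                      [0, 1, axis[1] * offset],
                      [0, 0, 1]])

# A cabinet door (revolute) and a drawer (prismatic) in one formalism;
# the handle offset of one half unit is a made-up example parameter.
door_frame = revolute(theta) * prismatic(sp.Rational(1, 2))
drawer_frame = prismatic(d)
print(sp.simplify(door_frame))
```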

The problem of worst case edge deletion from a network is considered. Suppose that you have a communication network and you can delete a single edge. Which edge deletion causes the largest disruption? More formally: given a graph, find the edge whose deletion disconnects the maximum number of pairs of vertices, with ties broken by choosing the edge whose deletion increases the average shortest path length by the largest amount. This problem is interesting both practically and theoretically. We call it the \emph{single edge deletion problem}. Our contributions include formally defining the single edge deletion problem and providing motivations from network analysis. Also, we give an algorithm that solves the problem much faster than a naive solution. The algorithm incorporates sophisticated and novel techniques, and generalises to the problem of computing the all-pairs shortest paths table after deleting each edge individually. This means the algorithm has deep theoretical interest as well as the potential for even wider applications than those we present here.
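For reference, here is a hedged sketch of the naive baseline such an algorithm improves upon: delete each edge in turn and recount connected pairs by BFS. It assumes comparable (e.g. integer) vertex ids and omits the shortest-path tie-breaking.

```python
# Hedged sketch of the naive connectivity criterion: O(m * (n + m)) overall.
from collections import deque

def connected_pairs(adj, skip=None):
    """Number of vertex pairs connected in the graph, ignoring edge `skip`."""
    seen, total = set(), 0
    for s in adj:
        if s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if (u, v) == skip or (v, u) == skip or v in seen:
                    continue
                seen.add(v)
                q.append(v)
        total += comp * (comp - 1) // 2
    return total

def worst_edge(adj):
    """Edge whose deletion leaves the fewest connected pairs."""
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    return min(edges, key=lambda e: connected_pairs(adj, skip=e))
```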

We show $\textsf{EOPL}=\textsf{PLS}\cap\textsf{PPAD}$. Here the class $\textsf{EOPL}$ consists of all total search problems that reduce to the End-of-Potential-Line problem, which was introduced in the works by Hubacek and Yogev (SICOMP 2020) and Fearnley et al. (JCSS 2020). In particular, our result yields a new simpler proof of the breakthrough collapse $\textsf{CLS}=\textsf{PLS}\cap\textsf{PPAD}$ by Fearnley et al. (STOC 2021). We also prove a companion result $\textsf{SOPL}=\textsf{PLS}\cap\textsf{PPADS}$, where $\textsf{SOPL}$ is the class associated with the Sink-of-Potential-Line problem.

We consider fixpoint algorithms for two-player games on graphs with $\omega$-regular winning conditions, where the environment is constrained by a strong transition fairness assumption. Strong transition fairness is a widely occurring special case of strong fairness, which requires that any execution is strongly fair with respect to a specified set of live edges: whenever the source vertex of a live edge is visited infinitely often along a play, the edge itself is traversed infinitely often along the play as well. We show that, surprisingly, strong transition fairness retains the algorithmic characteristics of the fixpoint algorithms for $\omega$-regular games -- the new algorithms can be obtained simply by replacing certain occurrences of the controllable predecessor by a new almost sure predecessor operator. For Rabin games with $k$ pairs, the complexity of the new algorithm is $O(n^{k+2}k!)$ symbolic steps, which is independent of the number of live edges in the strong transition fairness assumption. Further, we show that GR(1) specifications with strong transition fairness assumptions can be solved with a 3-nested fixpoint algorithm, the same nesting depth as the usual algorithm. In contrast, strong fairness necessarily requires increasing the alternation depth depending on the number of fairness assumptions. We get symbolic algorithms for (generalized) Rabin, parity and GR(1) objectives under strong transition fairness assumptions as well as a direct symbolic algorithm for qualitative winning in stochastic $\omega$-regular games that runs in $O(n^{k+2}k!)$ symbolic steps, improving the state of the art. Finally, we have implemented a BDD-based synthesis engine based on our algorithm. We show on a set of synthetic and real benchmarks that our algorithm is scalable, parallelizable, and outperforms previous algorithms by orders of magnitude.
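For context, here is a minimal sketch of the standard controllable predecessor operator and its attractor fixpoint, which the paper's algorithms adapt by swapping in an almost-sure predecessor; the explicit game-graph encoding is our assumption for illustration (the paper works symbolically with BDDs).

```python
# Hedged sketch: the controllable predecessor Cpre(U) is the set of vertices
# from which the system (player 0) can force the play into U in one step.

def cpre(vertices, edges, player0, U):
    """edges: dict v -> set of successors; player0: set of system vertices."""
    U = set(U)
    return {v for v in vertices
            if (v in player0 and edges[v] & U)         # some move enters U
            or (v not in player0 and edges[v] <= U)}   # all moves enter U

def attractor(vertices, edges, player0, target):
    """Least fixpoint of X = target ∪ Cpre(X): vertices from which player 0
    can force eventually reaching `target`."""
    X = set(target)
    while True:
        nxt = X | cpre(vertices, edges, player0, X)
        if nxt == X:
            return X
        X = nxt
```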

Let $\mathcal{G}$ be a directed graph with vertices $1,2,\ldots, 2N$. Let $\mathcal{T}=(T_{i,j})_{(i,j)\in\mathcal{G}}$ be a family of contractive similarity mappings. For every $1\leq i\leq N$, let $i^+:=i+N$. Let $\mathcal{M}_{i,j}=\{(i,j),(i,j^+),(i^+,j),(i^+,j^+)\}\cap\mathcal{G}$. We assume that $T_{\widetilde{i},\widetilde{j}}=T_{i,j}$ for every $(\widetilde{i},\widetilde{j})\in \mathcal{M}_{i,j}$. Let $K$ denote the Mauldin-Williams fractal determined by $\mathcal{T}$. Let $\chi=(\chi_i)_{i=1}^{2N}$ be a positive probability vector and $P$ a row-stochastic matrix which serves as an incidence matrix for $\mathcal{G}$. Let $\nu$ be the Markov-type measure associated with $\chi$ and $P$. Let $\Omega=\{1,\ldots,2N\}$ and $G_\infty=\{\sigma\in\Omega^{\mathbb{N}}:(\sigma_i,\sigma_{i+1})\in\mathcal{G}, \;i\geq 1\}$. Let $\pi$ be the natural projection from $G_\infty$ to $K$ and $\mu=\nu\circ\pi^{-1}$. We consider two cases: 1. $\mathcal{G}$ has two strongly connected components consisting of $N$ vertices; 2. $\mathcal{G}$ is strongly connected. Under some assumptions on $\mathcal{G}$ and $\mathcal{T}$, for case 1, we determine the exact value $s_r$ of $D_r(\mu)$ and prove that the $s_r$-dimensional lower quantization coefficient $\underline{Q}_r^{s_r}(\mu)$ is always positive, but the upper one $\overline{Q}_r^{s_r}(\mu)$ can be infinite; we establish a necessary and sufficient condition for $\overline{Q}_r^{s_r}(\mu)$ to be finite; for case 2, we determine $D_r(\mu)=:t_r$ by means of a pressure-like function and prove that $\underline{Q}_r^{t_r}(\mu)$ and $\overline{Q}_r^{t_r}(\mu)$ are always positive and finite.

Escaping saddle points is a central research topic in nonconvex optimization. In this paper, we propose a simple gradient-based algorithm such that for a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$, it outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log n/\epsilon^{1.75})$ iterations. Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}((\log n)^{4}/\epsilon^{2})$ or $\tilde{O}((\log n)^{6}/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$. For the stochastic setting, our algorithm outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}((\log n)^{2}/\epsilon^{4})$ iterations. Technically, our main contribution is a way of implementing a robust Hessian power method using only gradients, which can find negative curvature near saddle points and achieves the polynomial speedup in $\log n$ compared to perturbed gradient descent methods. Finally, we also perform numerical experiments that support our results.
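The following hedged numpy sketch illustrates the gradient-only Hessian power method idea: Hessian-vector products via finite differences of gradients, and a shifted power iteration extracting a negative-curvature direction. The shift `ell` (an assumed bound on the largest Hessian eigenvalue), the finite-difference radius, and the iteration count are illustrative choices, not the paper's tuned parameters.

```python
# Hedged sketch: a Hessian power method implemented with gradients only.
import numpy as np

def hvp(grad, x, v, r=1e-5):
    """H(x) v ≈ (∇f(x + r v) − ∇f(x)) / r, using only gradient evaluations."""
    return (grad(x + r * v) - grad(x)) / r

def negative_curvature(grad, x, dim, ell, iters=200, seed=0):
    """Power iteration on (ell*I − H): converges to the eigenvector of H with
    the smallest eigenvalue, i.e. the escape direction at a strict saddle."""
    v = np.random.default_rng(seed).standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = ell * v - hvp(grad, x, v)
        v = w / np.linalg.norm(w)
    curvature = v @ hvp(grad, x, v)   # Rayleigh quotient ≈ λ_min(H)
    return v, curvature

# Example: f(x, y) = x^2 − y^2 has a saddle at the origin with λ_min = −2.
grad = lambda p: np.array([2 * p[0], -2 * p[1]])
v, lam = negative_curvature(grad, np.array([1e-3, 1e-3]), dim=2, ell=2.0)
print(lam)  # ≈ −2; v ≈ ±e_y, the escape direction
```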

Large scale knowledge graph embedding has attracted much attention from both academia and industry in the field of Artificial Intelligence. However, most existing methods concentrate solely on fact triples contained in the given knowledge graph. Inspired by the fact that logic rules provide a flexible and declarative language for expressing rich background knowledge, it is natural to integrate logic rules into knowledge graph embedding, to transfer human knowledge to entity and relation embeddings and strengthen the learning process. In this paper, we propose a novel logic rule-enhanced method which can be easily integrated with any translation-based knowledge graph embedding model, such as TransE. We first introduce a method to automatically mine logic rules and their corresponding confidences from the triples. Then, to place both triples and mined logic rules in the same semantic space, all triples in the knowledge graph are represented as first-order logic. Finally, we define several operations on the first-order logic and minimize a global loss over both the mined logic rules and the transformed first-order logic. We conduct extensive experiments for link prediction and triple classification on three datasets: WN18, FB166, and FB15K. Experiments show that the rule-enhanced method can significantly improve the performance of several baselines. The highlight of our model is that the filtered Hits@1, a pivotal evaluation metric in the knowledge inference task, shows a significant improvement (up to 700%).
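As a hedged sketch of the ingredients (not the paper's full objective), the snippet below shows the TransE translation score the method builds on, plus one plausible way to turn a mined rule with a confidence into a soft penalty; the specific penalty form is our illustrative assumption.

```python
# Hedged sketch: TransE embeds a triple (h, r, t) so that h + r ≈ t; a mined
# implication r1 ⇒ r2 with confidence c can be turned into a soft loss term.
import numpy as np

def transe_score(h, r, t):
    """Lower is better: L1 distance ||h + r − t|| of the translated head."""
    return np.abs(h + r - t).sum()

def rule_penalty(score_body, score_head, confidence):
    """Illustrative soft penalty for a mined rule body ⇒ head: if the body
    triple is plausible (low score), the head triple should be too."""
    return confidence * max(0.0, score_head - score_body)

h, r1, r2, t = np.random.default_rng(1).standard_normal((4, 50))
loss = transe_score(h, r1, t) + rule_penalty(transe_score(h, r1, t),
                                             transe_score(h, r2, t), 0.9)
```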

Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem due to the growing deployment of classification algorithms in various social contexts. Several recent works have focused on fairness with respect to a specific metric, modeled the corresponding fair classification problem as a constrained optimization problem, and developed tailored algorithms to solve it. Despite this, there still remain important metrics for which we do not have fair classifiers, and many of the aforementioned algorithms do not come with theoretical guarantees, perhaps because the resulting optimization problem is non-convex. The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees. This is achieved by first developing a meta-algorithm for a large family of classification problems with convex constraints, and then showing that classification problems with general types of fairness constraints can be reduced to those in this family. We present empirical results showing that our algorithm can achieve near-perfect fairness with respect to various fairness metrics, and that the loss in accuracy due to the imposed fairness constraints is often small. Overall, this work unifies several prior works on fair classification, presents a practical algorithm with theoretical guarantees, and can handle fairness metrics that were previously out of reach.
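To make one such constraint concrete, here is a hedged sketch of the statistical rate (demographic parity ratio), a standard fairness metric that a meta-algorithm of this kind can accept as input; the choice of this particular metric is illustrative, as the paper handles a large class of such constraints.

```python
# Hedged sketch: the demographic parity ratio between groups defined by a
# sensitive attribute; 1.0 means parity, smaller values mean more disparity.
import numpy as np

def statistical_rate(y_pred, sensitive):
    """min over groups of P(ŷ=1 | group) divided by the max over groups."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return min(rates) / max(rates) if max(rates) > 0 else 1.0

# Example: group 0 positive rate 0.50, group 1 positive rate 0.25 -> 0.5.
print(statistical_rate([1, 0, 1, 0, 0, 0, 1, 0],
                       [0, 0, 0, 0, 1, 1, 1, 1]))
```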
