
Consider any locally checkable labeling problem $\Pi$ in rooted regular trees: there is a finite set of labels $\Sigma$, and for each label $x \in \Sigma$ we specify what are permitted label combinations of the children for an internal node of label $x$ (the leaf nodes are unconstrained). This formalism is expressive enough to capture many classic problems studied in distributed computing, including vertex coloring, edge coloring, and maximal independent set. We show that the distributed computational complexity of any such problem $\Pi$ falls in one of the following classes: it is $O(1)$, $\Theta(\log^* n)$, $\Theta(\log n)$, or $\Theta(n)$ rounds in trees with $n$ nodes (and all of these classes are nonempty). We show that the complexity of any given problem is the same in all four standard models of distributed graph algorithms: deterministic LOCAL, randomized LOCAL, deterministic CONGEST, and randomized CONGEST model. In particular, we show that randomness does not help in this setting, and complexity classes such as $\Theta(\log \log n)$ or $\Theta(\sqrt{n})$ do not exist (while they do exist in the broader setting of general trees). We also show how to systematically determine the distributed computational complexity of any such problem $\Pi$. We present an algorithm that, given the description of $\Pi$, outputs the round complexity of $\Pi$ in these models. While the algorithm may take exponential time in the size of the description of $\Pi$, it is nevertheless practical: we provide a freely available implementation of the classifier algorithm, and it is fast enough to classify many typical problems of interest.
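
As a concrete instance of the formalism, proper 3-coloring of a rooted binary tree (each child must receive a color different from its parent) can be written as such a problem $\Pi$; this problem is known to sit in the $\Theta(\log^* n)$ class. The encoding below is a hypothetical illustration of the formalism only, not the input format of the authors' classifier:

```python
from itertools import combinations_with_replacement

# Toy encoding of an LCL problem Pi in rooted binary trees: for each
# label x, the set of allowed (unordered) child-label combinations of
# an internal node labeled x. Here: proper 3-coloring, i.e. children
# must avoid their parent's color. Leaves are unconstrained.
LABELS = (1, 2, 3)
ALLOWED = {
    x: {p for p in combinations_with_replacement(LABELS, 2) if x not in p}
    for x in LABELS
}

def is_valid(tree):
    """tree = (label, children). A leaf (no children) is unconstrained."""
    label, children = tree
    if not children:
        return True
    combo = tuple(sorted(lab for lab, _ in children))
    return combo in ALLOWED[label] and all(is_valid(c) for c in children)

# root colored 1 with children colored 2 and 3: a valid labeling
print(is_valid((1, [(2, []), (3, [])])))  # True
```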

Related content

CC stands out in the area of computational complexity. Its subject matter sits at the intersection of mathematics and theoretical computer science, with a clear mathematical profile and rigorous mathematical presentation.
September 25, 2021

Stimulated by practical applications arising in viral marketing, this paper investigates a novel Budgeted $k$-Submodular Maximization problem, defined as follows: given a finite set $V$, a budget $B$, and a $k$-submodular function $f: (k+1)^V \rightarrow \mathbb{R}_+$, where each element $e \in V$ has a cost $c_i(e)$ of being placed into the $i$-th set $S_i$, the problem asks to find a solution $\mathbf{s}=(S_1, S_2, \ldots, S_k)$ whose total cost does not exceed $B$ and which maximizes $f(\mathbf{s})$. To address this problem, we propose two streaming algorithms with approximation guarantees. In particular, in the case where each element $e$ has the same cost across all $k$ sets, we propose a deterministic streaming algorithm with an approximation ratio of $\frac{1}{4}-\epsilon$ when $f$ is monotone and $\frac{1}{5}-\epsilon$ when $f$ is non-monotone. For the general case, we propose a randomized streaming algorithm whose approximation ratio, in expectation, is $\min\{\frac{\alpha}{2}, \frac{(1-\alpha)k}{(1+\beta)k-\beta} \}-\epsilon$ when $f$ is monotone and $\min\{\frac{\alpha}{2}, \frac{(1-\alpha)k}{(1+2\beta)k-2\beta} \}-\epsilon$ when $f$ is non-monotone, where $\beta=\max_{e\in V,\, i, j \in [k],\, i\neq j} \frac{c_i(e)}{c_j(e)}$ and $\epsilon, \alpha$ are fixed inputs.
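
To make the general-case guarantee concrete, here is a small numeric evaluation of the stated monotone ratio. The cost table and the values of $\alpha$, $\epsilon$ are illustrative stand-ins, not values from the paper:

```python
# Evaluate beta and the claimed monotone approximation ratio on toy inputs.

def beta(costs):
    """costs[e] = [c_1(e), ..., c_k(e)]; beta is the max cost skew."""
    return max(c[i] / c[j]
               for c in costs.values()
               for i in range(len(c)) for j in range(len(c)) if i != j)

def ratio_monotone(alpha, k, b, eps):
    return min(alpha / 2, (1 - alpha) * k / ((1 + b) * k - b)) - eps

costs = {'u': [1.0, 2.0], 'v': [3.0, 1.5]}   # k = 2, hypothetical costs
b = beta(costs)                               # here: 2.0
print(b, ratio_monotone(alpha=0.5, k=2, b=b, eps=0.01))  # 2.0 0.24
```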

We present an a posteriori error estimate based on equilibrated stress reconstructions for the finite element approximation of a unilateral contact problem with weak enforcement of the contact conditions. We start by proving a guaranteed upper bound for the dual norm of the residual. This norm is shown to control the natural energy norm up to a boundary term, which can be removed under a saturation assumption. The basic estimate is then refined to distinguish the different components of the error, and is used as a starting point to design an algorithm including adaptive stopping criteria for the nonlinear solver and automatic tuning of a regularization parameter. We then discuss a practical way of computing the stress reconstruction based on the Arnold-Falk-Winther finite elements. Finally, after briefly discussing the efficiency of our estimators, we showcase their performance on a panel of numerical tests.
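
The adaptive stopping idea can be phrased generically: keep iterating the nonlinear solver only while its error estimate still dominates a fraction of the discretization estimate, since further iterations would only polish an answer whose accuracy the mesh already limits. A schematic sketch with stand-in estimator callbacks, not the paper's estimators:

```python
def solve_adaptively(step, eta_disc, eta_alg, u0, gamma=0.1, max_it=100):
    """Iterate the nonlinear solver until the solver-error estimate
    eta_alg is dominated by the discretization estimate eta_disc."""
    u = u0
    for _ in range(max_it):
        if eta_alg(u) <= gamma * eta_disc(u):  # further solving is wasted
            break
        u = step(u)                            # one nonlinear-solver step
    return u

# toy use: the "solver" halves the iterate, the estimators are stand-ins
print(solve_adaptively(step=lambda u: u / 2,
                       eta_disc=lambda u: 0.05,
                       eta_alg=lambda u: abs(u),
                       u0=1.0))
```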

In the \textsc{Waypoint Routing Problem} one is given an undirected capacitated and weighted graph $G$, a source-destination pair $s,t\in V(G)$, and a set $W\subseteq V(G)$ of \emph{waypoints}. The task is to find a walk that starts at the source vertex $s$, visits all waypoints in any order, ends at the destination vertex $t$, respects edge capacities, that is, traverses each edge no more times than its capacity, and minimizes the cost computed as the sum of the costs of the traversed edges with multiplicities. We study the problem for graphs of bounded treewidth and present a new algorithm working in $2^{O(\mathrm{tw})}\cdot n$ time, significantly improving upon the previously known algorithms. We also show that this running time is optimal for the problem under the Exponential Time Hypothesis.
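
The problem definition pins down feasibility exactly, so a checker is easy to state. The sketch below, under an assumed edge-map representation (edges keyed by frozenset), verifies a candidate walk and returns its cost; the paper's contribution, the $2^{O(\mathrm{tw})}\cdot n$ dynamic program, is of course far beyond a few lines:

```python
from collections import Counter

def walk_cost(walk, s, t, waypoints, capacity, weight):
    """Return the cost of a feasible walk, or None if infeasible.
    walk is a vertex sequence; capacity/weight map frozenset({u, v})
    to the edge's capacity and cost."""
    if walk[0] != s or walk[-1] != t or not set(waypoints) <= set(walk):
        return None                           # wrong endpoints or missed waypoint
    uses = Counter(frozenset(e) for e in zip(walk, walk[1:]))
    if any(uses[e] > capacity.get(e, 0) for e in uses):
        return None                           # some edge exceeds its capacity
    return sum(uses[e] * weight[e] for e in uses)  # counts multiplicities

cap = {frozenset({'s', 'w'}): 2, frozenset({'w', 't'}): 1}
wgt = {frozenset({'s', 'w'}): 3, frozenset({'w', 't'}): 1}
print(walk_cost(['s', 'w', 't'], 's', 't', {'w'}, cap, wgt))  # 4
```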

Coloring unit-disk graphs efficiently is an important problem in the global and distributed setting, with applications in radio channel assignment problems when the communication relies on omni-directional antennas of the same power. In this context it is important to bound not only the complexity of the coloring algorithms, but also the number of colors used. In this paper, we consider two natural distributed settings. In the location-aware setting (when nodes know their coordinates in the plane), we give a constant-time distributed algorithm coloring any unit-disk graph $G$ with at most $(3+\epsilon)\omega(G)+6$ colors, for any constant $\epsilon>0$, where $\omega(G)$ is the clique number of $G$. This improves upon a classical 3-approximation algorithm for this problem, for all unit-disk graphs whose chromatic number significantly exceeds their clique number. When nodes do not know their coordinates in the plane, we give a distributed algorithm in the LOCAL model that colors every unit-disk graph $G$ with at most $5.68\omega(G)$ colors in (see \cref{sec:comput}) rounds. Moreover, when $\omega(G)=O(1)$, the algorithm runs in $O(\log^* n)$ rounds. This algorithm is based on a study of the local structure of unit-disk graphs, which is of independent interest. We conjecture that every unit-disk graph $G$ has average degree at most $4\omega(G)$, which would imply the existence of an $O(\log n)$-round algorithm coloring any unit-disk graph $G$ with (approximately) $4\omega(G)$ colors.
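
For readers who want to experiment, a naive baseline is easy to set up: build the unit-disk graph from the coordinates and color greedily along a left-to-right sweep, which is essentially the classical 3-approximation the abstract refers to. This is only a point of comparison, not the paper's $(3+\epsilon)\omega(G)+6$ algorithm:

```python
import math
from itertools import combinations

def unit_disk_graph(points):
    """points: name -> (x, y); edge iff Euclidean distance <= 1."""
    adj = {v: set() for v in points}
    for u, v in combinations(points, 2):
        if math.dist(points[u], points[v]) <= 1:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def greedy_coloring(adj, order):
    """Assign each vertex the smallest color unused by its neighbors."""
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        color[v] = min(c for c in range(len(adj) + 1) if c not in taken)
    return color

pts = {'a': (0, 0), 'b': (0.5, 0), 'c': (2, 0)}
adj = unit_disk_graph(pts)
print(greedy_coloring(adj, sorted(pts, key=lambda v: pts[v])))
```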

Edge computing has been an efficient way to provide prompt and near-data computing services for resource- and delay-sensitive IoT applications via computation offloading. Effective computation offloading strategies need to cope comprehensively with several major issues, including the allocation of dynamic communication and computational resources, the deadline constraints of heterogeneous tasks, and the requirements for computationally inexpensive and distributed algorithms. However, most existing works focus on only part of these issues, which does not suffice to achieve the expected performance in complex and practical scenarios. To tackle this challenge, in this paper we systematically study a distributed computation offloading problem with hard delay constraints, where heterogeneous computational tasks need to be continually offloaded to a set of edge servers via a limited number of stochastic communication channels. The task offloading problem is then cast as a delay-constrained long-term stochastic optimization problem without a priori statistical knowledge. To solve this problem, we first provide a technical path to transform and decompose it into several slot-level subproblems, and then develop a distributed online algorithm, namely TODG, that efficiently allocates the resources and schedules the offloading tasks with delay guarantees. Further, we present a comprehensive analysis of TODG, in terms of the optimality gap, the delay guarantees, and the impact of system parameters. Extensive simulation results demonstrate the effectiveness and efficiency of TODG.
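
The slot-level decomposition follows a familiar pattern: long-term constraints become virtual queues, and each slot solves a small subproblem trading off cost against queue backlog. The sketch below is textbook drift-plus-penalty used purely as a stand-in; TODG's actual transformation and its hard-delay handling are more involved:

```python
def run_slots(T, actions, cost, usage, budget, V=10.0):
    """Generic drift-plus-penalty loop: per slot, pick the action
    minimizing V*cost + Q*usage, then update the virtual queue Q."""
    Q = 0.0                                   # backlog of constraint violation
    for t in range(T):
        a = min(actions(t), key=lambda a: V * cost(t, a) + Q * usage(t, a))
        Q = max(Q + usage(t, a) - budget, 0.0)
        yield a

# toy use: offload locally (0, slow, free) or remotely (1, fast, uses channel)
acts = lambda t: [0, 1]
cost = lambda t, a: 1.0 if a == 0 else 0.2
use = lambda t, a: 0.0 if a == 0 else 1.0
print(list(run_slots(5, acts, cost, use, budget=0.5)))
```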

A streaming algorithm is considered to be adversarially robust if it provides correct outputs with high probability even when the stream updates are chosen by an adversary who may observe and react to the past outputs of the algorithm. We grow the burgeoning body of work on such algorithms in a new direction by studying robust algorithms for the problem of maintaining a valid vertex coloring of an $n$-vertex graph given as a stream of edges. Following standard practice, we focus on graphs with maximum degree at most $\Delta$ and aim for colorings using a small number $f(\Delta)$ of colors. A recent breakthrough (Assadi, Chen, and Khanna; SODA~2019) shows that in the standard, non-robust, streaming setting, $(\Delta+1)$-colorings can be obtained while using only $\widetilde{O}(n)$ space. Here, we prove that an adversarially robust algorithm running under a similar space bound must spend almost $\Omega(\Delta^2)$ colors and that robust $O(\Delta)$-coloring requires a linear amount of space, namely $\Omega(n\Delta)$. We in fact obtain a more general lower bound, trading off the space usage against the number of colors used. From a complexity-theoretic standpoint, these lower bounds provide (i)~the first significant separation between adversarially robust algorithms and ordinary randomized algorithms for a natural problem on insertion-only streams and (ii)~the first significant separation between randomized and deterministic coloring algorithms for graph streams, since deterministic streaming algorithms are automatically robust. We complement our lower bounds with a suite of positive results, giving adversarially robust coloring algorithms using sublinear space. In particular, we can maintain an $O(\Delta^2)$-coloring using $\widetilde{O}(n \sqrt{\Delta})$ space and an $O(\Delta^3)$-coloring using $\widetilde{O}(n)$ space.
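
For orientation, the trivial robust solution simply stores the entire graph, landing exactly in the $\Omega(n\Delta)$ space regime that the lower bound says is unavoidable for $O(\Delta)$ colors; being deterministic, it is automatically robust. A minimal sketch:

```python
class TrivialRobustColoring:
    """Store every edge (O(n * Delta) space) and maintain a valid
    (Delta+1)-coloring greedily under edge insertions."""

    def __init__(self, n, delta):
        self.adj = {v: set() for v in range(n)}
        self.palette = range(delta + 1)
        self.color = {v: 0 for v in range(n)}

    def insert(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        if self.color[u] == self.color[v]:    # conflict: recolor v
            used = {self.color[w] for w in self.adj[v]}
            self.color[v] = next(c for c in self.palette if c not in used)

algo = TrivialRobustColoring(n=3, delta=2)
for e in [(0, 1), (1, 2), (0, 2)]:
    algo.insert(*e)
print(algo.color)  # a valid 3-coloring of the triangle
```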

The expansion of Fiber-To-The-Home (FTTH) networks creates high costs due to expensive excavation procedures. Optimizing the planning process and minimizing the cost of the earth excavation work therefore lead to large savings. Mathematically, the FTTH network problem can be described as a minimum Steiner Tree problem. Even though the Steiner Tree problem has already been investigated intensively in the last decades, it might be further optimized with the help of new computing paradigms and emerging approaches. This work studies upcoming technologies, such as Quantum Annealing, Simulated Annealing and nature-inspired methods like Evolutionary Algorithms or slime-mold-based optimization. Additionally, we investigate partitioning and simplifying methods. Evaluated on several real-life problem instances, we could outperform a traditional, widely-used baseline (NetworkX Approximate Solver) on most of the domains. Prior partitioning of the initial graph and the presented slime-mold-based approach were especially valuable for a cost-efficient approximation. Quantum Annealing seems promising, but was limited by the number of available qubits.
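
The baseline the evaluation refers to is NetworkX's approximate Steiner tree solver (by default the classic metric-closure heuristic). A toy FTTH-style instance with made-up weights shows the call:

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Hypothetical street-network fragment: weights are illustrative only.
G = nx.Graph()
G.add_weighted_edges_from([
    ("cabinet", "j1", 4), ("j1", "home_a", 2),
    ("j1", "home_b", 3), ("cabinet", "home_b", 8),
])
terminals = ["cabinet", "home_a", "home_b"]   # nodes that must be connected

T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges()), T.size(weight="weight"))  # tree edges, total cost
```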

The field of dynamic graph algorithms aims at achieving a thorough understanding of real-world networks whose topology evolves with time. Traditionally, the focus has been on the classic sequential, centralized setting where the main quality measure of an algorithm is its update time, i.e. the time needed to restore the solution after each update. While real-life networks are very often distributed across multiple machines, the fundamental question of finding efficient dynamic, distributed graph algorithms received little attention to date. The goal in this setting is to optimize both the round and message complexities incurred per update step, ideally achieving a message complexity that matches the centralized update time in $O(1)$ (perhaps amortized) rounds. Toward initiating a systematic study of dynamic, distributed algorithms, we study some of the most central symmetry-breaking problems: maximal independent set (MIS), maximal matching/(approx-) maximum cardinality matching (MM/MCM), and $(\Delta + 1)$-vertex coloring. This paper focuses on dynamic, distributed algorithms that are deterministic, and in particular -- robust against an adaptive adversary. Most of our focus is on our MIS algorithm, which achieves $O\left(m^{2/3}\log^2 n\right)$ amortized messages in $O\left(\log^2 n\right)$ amortized rounds in the Congest model. Notably, the amortized message complexity of our algorithm matches the amortized update time of the best-known deterministic centralized MIS algorithm by Gupta and Khan [SOSA'21] up to a polylog $n$ factor. The previous best deterministic distributed MIS algorithm, by Assadi et al. [STOC'18], uses $O(m^{3/4})$ amortized messages in $O(1)$ amortized rounds, i.e., we achieve a polynomial improvement in the message complexity by a polylog $n$ increase to the round complexity; moreover, the algorithm of Assadi et al. makes an implicit assumption that the [...]
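
To make the "update time" yardstick concrete, here is a minimal centralized dynamic-MIS sketch for edge insertions: on a conflict, demote one endpoint and let its now-uncovered neighbors join. The paper's challenge is performing the analogous repair distributedly in CONGEST with few messages and rounds, which this toy code does not attempt:

```python
class DynamicMIS:
    def __init__(self, n):
        self.adj = {v: set() for v in range(n)}
        self.in_mis = {v: True for v in range(n)}  # empty graph: all nodes

    def insert(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        if self.in_mis[u] and self.in_mis[v]:
            self.in_mis[v] = False                 # demote one endpoint
            for w in self.adj[v]:                  # neighbors may now join
                if not self.in_mis[w] and not any(
                        self.in_mis[x] for x in self.adj[w]):
                    self.in_mis[w] = True

D = DynamicMIS(3)
D.insert(0, 1)
D.insert(1, 2)
print([v for v in range(3) if D.in_mis[v]])  # a valid MIS, e.g. [0, 2]
```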

Extreme multi-label classification (XMC) aims to learn a model that can tag data points with a subset of relevant labels from an extremely large label set. Real world e-commerce applications like personalized recommendations and product advertising can be formulated as XMC problems, where the objective is to predict for a user a small subset of items from a catalog of several million products. For such applications, a common approach is to organize these labels into a tree, enabling training and inference times that are logarithmic in the number of labels. While training a model once a label tree is available is well studied, designing the structure of the tree is a difficult task that is not yet well understood, and can dramatically impact both model latency and statistical performance. Existing approaches to tree construction fall at one of two extremes, optimizing exclusively either for statistical performance or for latency. We propose an efficient information theory inspired algorithm to construct intermediary operating points that trade off between the benefits of both. Our algorithm enables interpolation between these objectives, which was not previously possible. We corroborate our theoretical analysis with numerical results, showing that on the Wiki-500K benchmark dataset our method can reduce a proxy for expected latency by up to 28% while maintaining the same accuracy as Parabel. On several datasets derived from e-commerce customer logs, our modified label tree is able to improve this expected latency metric by up to 20% while maintaining the same accuracy. Finally, we discuss challenges in realizing these latency improvements in deployed models.
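
The latency benefit of a label tree comes from beam search visiting only on the order of beam x depth x branching nodes per query instead of all labels. A self-contained sketch with a stand-in scorer (real XMC systems learn one classifier per tree node; all names here are hypothetical):

```python
import heapq

def beam_search(tree, score, x, beam=2):
    """tree: node -> list of children (absent key = leaf label);
    score(x, node) -> float. Returns the top labels with path scores."""
    frontier = {"root": score(x, "root")}
    while any(n in tree for n in frontier):        # some node is internal
        cand = {}
        for n, s in frontier.items():
            for c in tree.get(n, []):              # expand internal nodes
                cand[c] = s + score(x, c)
            if n not in tree:                      # leaf: carry it through
                cand[n] = s
        frontier = dict(heapq.nlargest(beam, cand.items(),
                                       key=lambda kv: kv[1]))
    return frontier

tree = {"root": ["n0", "n1"], "n0": ["shoes", "boots"], "n1": ["hats"]}
score = lambda x, node: 0.1 * len(node)            # stand-in for learned scorers
print(beam_search(tree, score, x=None))            # top-2 leaf labels
```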

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
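
The smoothing device behind DRS replaces a non-smooth $f$ by $f_\gamma(x) = \mathbb{E}[f(x+\gamma Z)]$ with Gaussian $Z$, which is smooth and whose gradient can be estimated from function values alone. A single-machine sketch of that estimator only; DRS itself wraps it in a distributed primal-dual scheme, not shown:

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, m=2000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((m, x.size))
    # Gaussian-smoothing identity:
    #   grad f_gamma(x) = E[(f(x + gamma Z) - f(x)) Z] / gamma
    # (subtracting f(x) changes nothing in expectation since E[Z] = 0,
    # but greatly reduces the variance of the estimate)
    return ((f(x + gamma * z) - f(x))[:, None] * z).mean(axis=0) / gamma

f = lambda X: np.abs(X).sum(axis=-1)   # non-smooth objective: the l1 norm
x = np.array([1.0, -0.5])
print(smoothed_grad(f, x))             # close to the subgradient sign(x) = [1, -1]
```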
