
We study how to accelerate the spreading of information in temporal graphs via delaying operations, a problem that captures real-world applications ranging from information flows to distribution schedules. In a temporal graph there is a fixed set of vertices, and the available connections between them change over time in a predefined manner. We observe that, in some cases, delaying some connections can actually decrease the time required to reach a target vertex from a source vertex. We study how to minimize the maximum time a set of source vertices needs to reach every other vertex of the graph when we are allowed to delay some of the connections. For a single source, we prove that the problem is NP-hard, and W[2]-hard when parameterized by the number of allowed delays. On the other hand, we derive a polynomial-time algorithm for one source and an unbounded number of delays. This is the best we can hope for: we show that the problem becomes NP-hard when there are two sources and the number of delays is unbounded. We complement this negative result with an FPT algorithm parameterized by the treewidth of the graph plus the lifetime of the optimal solution. Finally, we provide polynomial-time algorithms for several classes of graphs.
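
For intuition, here is a minimal Python sketch (not the paper's algorithm) of earliest-arrival computation in a temporal graph, where a connection is a triple (u, v, t) that is available only at time t and takes one time step to traverse; delaying a connection means increasing its time label. The toy instance below shows a case where a delay strictly helps.

```python
def earliest_arrival(n, connections, source):
    """connections: list of (u, v, t); traversing a connection takes one step."""
    INF = float("inf")
    arrival = [INF] * n
    arrival[source] = 0
    # Scan connections in chronological order: (u, v, t) is usable only
    # if one of its endpoints has already been reached by time t.
    for u, v, t in sorted(connections, key=lambda c: c[2]):
        if arrival[u] <= t:
            arrival[v] = min(arrival[v], t + 1)
        if arrival[v] <= t:
            arrival[u] = min(arrival[u], t + 1)
    return arrival

# Vertex 2 is unreachable: the connection (1, 2, 1) fires before vertex 1
# is reached at time 3 ...
print(earliest_arrival(3, [(0, 1, 2), (1, 2, 1)], 0))  # [0, 3, inf]
# ... but delaying it to time 3 makes vertex 2 reachable at time 4.
print(earliest_arrival(3, [(0, 1, 2), (1, 2, 3)], 0))  # [0, 3, 4]
```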

Related content

The problem of testing low-degree polynomials has received significant attention over the years due to its importance in theoretical computer science, and in particular in complexity theory. The problem is specified by three parameters: the field size $q$, the degree $d$ and the proximity parameter $\delta$, and the goal is to design a tester making as few queries as possible to a given function, which is able to distinguish between the case that the given function has degree at most $d$ and the case that it is $\delta$-far from any degree $d$ function. A tester is called optimal if it makes $O(q^{d+1}/\delta)$ queries (which are known to be necessary). Over a field of size $q$, the natural $t$-flat tester was shown to be optimal first by Bhattacharyya et al. for $q=2$, and later by Haramaty et al. for all prime powers $q$. The dependency on the field size, however, is a tower-type function. We improve the results above, showing that the dependency on the field size is polynomial. Our approach also applies in the more general setting of lifted affine invariant codes, and is based on studying the structure of the collection of erroneous subspaces, i.e., subspaces $A$ such that $f|_{A}$ has degree greater than $d$. Towards this end, we observe that these sets are poorly expanding in the affine version of the Grassmann graph and use this to establish structural results on them via global hypercontractivity. We then use this structure to perform local correction on $f$.
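
As a toy rendering of the natural flat tester in the simplest case $q=2$ (where $t=d+1$): a function $f:\{0,1\}^n \to \{0,1\}$ has degree at most $d$ iff its sum over every $(d+1)$-dimensional affine subspace vanishes, so one round queries the $2^{d+1}$ points of a random $(d+1)$-flat and checks that they XOR to zero. The sketch below is illustrative only; if the random directions are linearly dependent the round degenerates and always accepts, which preserves perfect completeness.

```python
import random

def flat_test(f, n, d, rounds=100):
    """Accept iff f: {0,1}^n -> {0,1} looks like degree <= d."""
    for _ in range(rounds):
        x = [random.randrange(2) for _ in range(n)]          # base point
        dirs = [[random.randrange(2) for _ in range(n)]      # d+1 directions
                for _ in range(d + 1)]
        total = 0
        for mask in range(1 << (d + 1)):                     # the 2^(d+1) points
            point = x[:]
            for i in range(d + 1):
                if (mask >> i) & 1:
                    point = [(a + b) % 2 for a, b in zip(point, dirs[i])]
            total ^= f(tuple(point))
        if total:                                            # nonzero derivative
            return False                                     # reject: degree > d
    return True

# x0*x1 has degree 2: it always passes the d=2 test, and w.h.p. fails d=1.
f = lambda y: y[0] & y[1]
print(flat_test(f, 5, 2), flat_test(f, 5, 1))  # True False
```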

We study an online scheduling problem where machines need to be calibrated before processing any jobs. Calibrating a machine takes $\lambda$ time steps (the activation time), after which the machine remains calibrated for $T$ time steps; a job can only be processed on a machine that is currently calibrated. Given a set of jobs arriving online, each characterized by a release time, a processing time, and a deadline, and assuming an unbounded number of machines is available, the objective is to minimize the total number of calibrations while feasibly scheduling all jobs. For the case where all jobs have unit processing times, we propose an $\mathcal{O}(\lambda)$-competitive algorithm, which is asymptotically optimal. When $\lambda=0$, the problem reduces to rent minimization, for which our algorithm achieves a competitive ratio of $3e+7\ (\approx 15.16)$, improving upon previous results for such problems.
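
For intuition only, here is a toy offline greedy for unit-time jobs; this is not the paper's $\mathcal{O}(\lambda)$-competitive online algorithm. A calibration opens a window of $T$ consecutive usable time steps on one machine (in the online setting it must be started $\lambda$ steps before the window opens, which is what makes the problem hard). Each job $(r, d)$ needs one free step $t$ with $r \le t < d$ inside some window.

```python
T = 5  # calibrated duration (assumed value)

def greedy_calibrations(jobs):
    windows = []  # each calibration: (window_start, set of used time steps)
    for r, d in sorted(jobs, key=lambda j: j[1]):       # earliest deadline first
        for start, used in windows:
            # first free step for this job inside an existing window
            t = next((t for t in range(max(r, start), min(d, start + T))
                      if t not in used), None)
            if t is not None:
                used.add(t)
                break
        else:
            windows.append((r, {r}))                    # open new window [r, r+T)
    return len(windows)

# Three jobs fit into one window; the far-away job forces a second calibration.
print(greedy_calibrations([(0, 3), (0, 3), (1, 4), (10, 12)]))  # 2
```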

Weighted round robin (WRR) is a simple, efficient packet scheduler providing low latency and fairness by assigning each flow a weight that determines how many of its packets may be sent consecutively. A variant of WRR that mitigates its tendency to increase burstiness, called interleaved weighted round robin (IWRR), has recently seen analytical treatment \cite{TLBB21}; a network calculus approach was used to obtain the best-possible strict service curve. From a different perspective, WRR can also be interpreted as an emulation of an idealized fair scheduler known as generalized processor sharing (GPS). Inspired by profound literature results on the performance analysis of GPS, we show that both WRR and IWRR belong to a larger class of fair schedulers called bandwidth-sharing policies. We use this insight to derive new strict service curves for both schedulers that, under the additional assumption of constrained cross-traffic flows, significantly improve the state-of-the-art results and lead to smaller delay bounds.
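
A minimal sketch of the two emission orders (illustrative, saturated flows with integer weights): WRR sends each flow's full quantum back to back, while IWRR spreads it over weight-many subrounds, which is exactly what reduces burstiness.

```python
def wrr_round(weights):
    """One WRR round: flow i sends weights[i] packets consecutively."""
    order = []
    for flow, w in enumerate(weights):
        order += [flow] * w
    return order

def iwrr_round(weights):
    """One IWRR round: each subround serves every flow with credit left."""
    order = []
    for subround in range(max(weights)):
        for flow, w in enumerate(weights):
            if subround < w:            # flow still has credit this subround
                order.append(flow)
    return order

print(wrr_round([3, 1, 2]))   # [0, 0, 0, 1, 2, 2]  -- bursty
print(iwrr_round([3, 1, 2]))  # [0, 1, 2, 0, 2, 0]  -- interleaved
```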

Regular functions from infinite words to infinite words can be equivalently specified by MSO-transducers, streaming $\omega$-string transducers, or deterministic two-way transducers with look-ahead. In their one-way restriction, the latter transducers define the class of rational functions. Even though regular functions are robustly characterised by several finite-state devices, even the subclass of rational functions may contain functions which are not computable (by a Turing machine with infinite input). This paper proposes a decision procedure for the following synthesis problem: given a regular function $f$ (equivalently specified by one of the aforementioned transducer models), is $f$ computable, and if so, synthesize a Turing machine computing it. For regular functions, we show that computability is equivalent to continuity, so the problem boils down to deciding continuity. We establish a generic characterisation of continuity for functions preserving regular languages under inverse image (such as regular functions), and exploit it to show the decidability of continuity (and hence computability) of rational and regular functions. For rational functions, we show that this can be done in \textsc{NLogSpace} (it was already known to be in \textsc{PTime} by Prieur). In a similar fashion, we also effectively characterise uniform continuity of regular functions, and relate it to the notion of uniform computability, which offers stronger efficiency guarantees.
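
A toy illustration (not the paper's construction) of why continuity is the right notion: a one-way transducer can emit output after each input letter, so longer input prefixes determine longer output prefixes. The sketch below doubles every 'a' and erases every 'b'; as an $\omega$-word function it is only defined on inputs with infinitely many 'a's, and erasure is why some output positions are determined only later.

```python
from itertools import islice, cycle

def double_a_erase_b(stream):
    """A sequential (one-way) transducer run as a generator."""
    for letter in stream:      # consume the infinite word letter by letter
        if letter == 'a':
            yield 'a'
            yield 'a'
        # 'b' is erased: nothing is emitted, the transducer just moves on

inp = cycle('ab')              # the infinite word (ab)^omega
print(''.join(islice(double_a_erase_b(inp), 8)))  # aaaaaaaa
```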

We consider the problem of partitioning a graph into a non-fixed number of non-overlapping subgraphs of maximum density. The density of a partition is the sum of the densities of its subgraphs, where the density of a subgraph is its average degree, that is, the ratio between its number of edges and its number of vertices. This problem, called Dense Graph Partition, is known to be NP-hard on general graphs, polynomial-time solvable on trees, and polynomial-time 2-approximable. In this paper we study the restriction of Dense Graph Partition to particular sparse and dense graph classes. In particular, we prove that it is NP-hard on dense bipartite graphs as well as on cubic graphs. On dense graphs on $n$ vertices, it is polynomial-time solvable on graphs with minimum degree $n-3$ and NP-hard on $(n-4)$-regular graphs. We prove that it is polynomial-time $4/3$-approximable on cubic graphs and admits an efficient polynomial-time approximation scheme on graphs of minimum degree $n-t$ for any constant $t\geq 4$.
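
To make the objective concrete, here is an exponential brute-force sketch (for illustration only) that scores a partition as the sum over parts of (edges inside the part) / (vertices in the part) and searches all partitions of a tiny graph.

```python
def part_density(part, edges):
    inside = sum(1 for u, v in edges if u in part and v in part)
    return inside / len(part)

def all_partitions(vs):
    """Enumerate all partitions of the list vs into nonempty sets."""
    if not vs:
        yield []
        return
    first, rest = vs[0], vs[1:]
    for smaller in all_partitions(rest):
        for i in range(len(smaller)):          # add first to an existing part
            yield smaller[:i] + [smaller[i] | {first}] + smaller[i + 1:]
        yield smaller + [{first}]              # or open a new part

def best_partition(vertices, edges):
    return max(all_partitions(list(vertices)),
               key=lambda p: sum(part_density(s, edges) for s in p))

# A triangle plus a disjoint edge: splitting them scores 3/3 + 1/2 = 1.5,
# beating the single-part partition's 4/5 = 0.8.
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
print(best_partition(range(5), edges))  # [{0, 1, 2}, {3, 4}] (in some order)
```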

In scheduling, we are given a set of jobs together with a number of machines, and our goal is to decide, for every job, when and on which machine(s) it should be scheduled in order to minimize some objective function. Different machine models, job characteristics and objective functions result in a multitude of scheduling problems, and many of them are NP-hard, even for a fixed number of identical machines. However, using pseudo-polynomial or approximation algorithms, we can still hope to solve some of these problems efficiently. In this work, we give conditional running time lower bounds for a large number of scheduling problems, indicating the optimality of some classical algorithms. In particular, we show that the dynamic programming algorithm by Lawler and Moore is probably optimal for $1||\sum w_jU_j$ and $Pm||C_{max}$. Moreover, the FPTAS by Gens and Levner for $1||\sum w_jU_j$ and the algorithm by Lee and Uzsoy for $P2||\sum w_jC_j$ are probably optimal as well. There is still small room for improvement for the $1|Rej\leq Q|\sum w_jU_j$ algorithm by Zhang et al. and the algorithm for $1||\sum T_j$ by Lawler. We also give a lower bound for $P2|any|C_{max}$ and improve the dynamic program by Du and Leung from $\mathcal{O}(nP^2)$ to $\mathcal{O}(nP)$ to match this lower bound. Here, $P$ is the sum of all processing times. The same idea also improves the algorithm for $P3|any|C_{max}$ by Du and Leung from $\mathcal{O}(nP^5)$ to $\mathcal{O}(nP^2)$. The lower bounds in this work all either rely on the (Strong) Exponential Time Hypothesis or the $(\min,+)$-conjecture. While our results suggest the optimality of some classical algorithms, they also motivate future research in cases where the best known algorithms do not quite match the lower bounds.
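
For concreteness, here is a textbook-style Python sketch of the Lawler and Moore dynamic program for $1||\sum w_jU_j$: process jobs in earliest-due-date order and let dp[t] be the maximum on-time weight achievable with total on-time processing time t, giving $\mathcal{O}(nP)$ time with $P$ the sum of processing times.

```python
def lawler_moore(jobs):                        # jobs: list of (p_j, d_j, w_j)
    P = sum(p for p, _, _ in jobs)
    dp = [0] + [-1] * P                        # -1 marks unreachable loads
    for p, d, w in sorted(jobs, key=lambda j: j[1]):   # EDD order
        # A job finishing at time t is on time iff t <= d_j; iterate t
        # downwards so each job is used at most once (0/1 knapsack style).
        for t in range(min(d, P), p - 1, -1):
            if dp[t - p] >= 0:
                dp[t] = max(dp[t], dp[t - p] + w)
    best_on_time = max(dp)
    return sum(w for _, _, w in jobs) - best_on_time   # weighted late jobs

# Two unit jobs competing for the single slot before deadline 1:
# the lighter job must be late.
print(lawler_moore([(1, 1, 10), (1, 1, 3)]))  # 3
```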

Motivated by the success of the {\em serial dictatorship} mechanism in social choice settings, we explore its usefulness in tackling various combinatorial optimization problems. We do so by considering an abstract model, in which a set of agents are asked to {\em act} in a particular ordering, called the {\em action sequence}. Each agent acts in a way that gives her the maximum possible value, given the actions of the agents who preceded her in the action sequence. Our goal is to compute action sequences that yield approximately optimal total value to the agents (a.k.a., {\em social welfare}). We assume {\em query access} to the value $v_i(S)$ that agent $i$ gets when she acts after the agents in the ordered set $S$. We establish tight bounds on the social welfare that can be achieved using polynomially many queries. Even though these bounds show only a marginally sublinear approximation of the optimal social welfare in general, excellent approximations can be obtained when the valuations stem from an underlying combinatorial domain. Indicatively, when the valuations are defined using bipartite matchings, arborescences in directed graphs, and satisfiability of Boolean expressions, simple query-efficient algorithms yield $2$-approximations. We discuss issues related to truthfulness and show how some of our algorithms can be implemented truthfully using VCG-like payments. Finally, we introduce and study the {\em price of serial dictatorship}, a notion that provides an optimistic measure of the quality of combinatorial optimization solutions generated by action sequences.
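
A minimal sketch of serial dictatorship on the bipartite matching domain (the values below are illustrative): each agent, in the order given by the action sequence, takes the remaining item she values most, and the achieved welfare depends on the chosen sequence.

```python
def serial_dictatorship(values, sequence):
    """values[i][j] = value of item j to agent i; sequence = action order."""
    taken, welfare = set(), 0
    for i in sequence:
        free = [j for j in range(len(values[i])) if j not in taken]
        if not free:
            break
        j = max(free, key=lambda j: values[i][j])  # agent i's greedy action
        taken.add(j)
        welfare += values[i][j]
    return welfare

values = [[9, 8, 0],
          [8, 1, 0],
          [0, 7, 6]]
print(serial_dictatorship(values, [0, 1, 2]))  # 9 + 1 + 6 = 16
print(serial_dictatorship(values, [1, 0, 2]))  # 8 + 8 + 6 = 22: order matters
```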

Influence maximization is the task of selecting a small number of seed nodes in a social network so as to maximize the spread of influence from these seeds, and it has been widely investigated in the past two decades. In the canonical setting, the whole social network and its diffusion parameters are given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS), and present constant approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data. Compared with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation or convex programming. Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and we do not need any assumption related to the network structure or diffusion parameters.
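
For reference, here is a minimal sketch of the classical baseline in the canonical setting: greedy influence maximization under the independent cascade model with known edge probabilities, with spread estimated by Monte Carlo simulation. (IMS must approach this objective without being given the graph.)

```python
import random

def simulate(graph, seeds):
    """graph[u] = list of (v, p); run one independent cascade from seeds."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p in graph.get(u, []):
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return len(active)

def greedy_im(graph, nodes, k, sims=200):
    """Greedily add the node with the largest estimated marginal spread."""
    seeds = set()
    for _ in range(k):
        best = max((n for n in nodes if n not in seeds),
                   key=lambda n: sum(simulate(graph, seeds | {n})
                                     for _ in range(sims)))
        seeds.add(best)
    return seeds

graph = {0: [(1, 0.9), (2, 0.9)], 3: [(4, 0.1)]}
print(greedy_im(graph, range(5), 1))  # {0}: seeding 0 reaches 1 and 2 w.h.p.
```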

Many problems in areas as diverse as recommendation systems, social network analysis, semantic search, and distributed root cause analysis can be modeled as pattern search on labeled graphs (also called "heterogeneous information networks" or HINs). Given a large graph and a query pattern with node and edge label constraints, a fundamental challenge is to find the top-k matches according to a ranking function over edge and node weights. For users, it is difficult to select the value k. We therefore propose the novel notion of an any-k ranking algorithm: for a given time budget, return as many of the top-ranked results as possible; then, given additional time, produce the next lower-ranked results quickly as well. It can be stopped anytime, but may have to continue until all results are returned. This paper focuses on acyclic patterns over arbitrary labeled graphs. We are interested in practical algorithms that effectively exploit (1) properties of heterogeneous networks, in particular selective constraints on labels, and (2) the fact that users often explore only a fraction of the top-ranked results. Our solution, KARPET, carefully integrates aggressive pruning that leverages the acyclic nature of the query with incremental guided search. It enables us to prove strong non-trivial time and space guarantees, which is generally considered very hard for this type of graph search problem. Through experimental studies we show that KARPET achieves running times on the order of milliseconds for tree patterns on large networks with millions of nodes and edges.
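
A toy stand-in for the any-k idea (not KARPET itself): enumerate matches of a 2-hop path pattern in increasing total edge weight, yielding each next-best match on demand. Best-first expansion of partial matches from a priority queue is correct here because edge weights are nonnegative, so a completed match popped from the heap can no longer be beaten; the caller can stop after any number of results.

```python
import heapq

def any_k_paths(edges):
    """edges: dict u -> list of (v, weight); pattern is x -> y -> z."""
    heap = [(w, [u, v]) for u, nbrs in edges.items() for v, w in nbrs]
    heapq.heapify(heap)
    while heap:
        cost, path = heapq.heappop(heap)
        if len(path) == 3:
            yield cost, path                    # next-best complete match
            continue
        for v, w in edges.get(path[-1], []):    # extend the partial match
            heapq.heappush(heap, (cost + w, path + [v]))

edges = {'a': [('b', 1), ('c', 5)], 'b': [('d', 1)], 'c': [('d', 1)]}
gen = any_k_paths(edges)
print(next(gen))  # (2, ['a', 'b', 'd']) -- stop anytime, or keep going
print(next(gen))  # (6, ['a', 'c', 'd'])
```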

In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely: the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, either strongly convex or smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
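
For reference, a minimal sketch of the centralized building block, Nesterov's accelerated gradient method on a smooth and strongly convex problem (here a toy quadratic); in the paper's setting the same scheme is applied to the dual of the affinely constrained problem.

```python
import numpy as np

def nesterov(grad, x0, L, mu, iters=100):
    """Accelerated gradient descent for an L-smooth, mu-strongly convex f."""
    kappa = L / mu
    beta = (kappa**0.5 - 1) / (kappa**0.5 + 1)   # momentum coefficient
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = y - grad(y) / L                 # gradient step from y
        y = x_next + beta * (x_next - x)         # extrapolation (momentum)
        x = x_next
    return x

# f(x) = 0.5 x^T A x - b^T x with A = diag(1, 10), so L = 10 and mu = 1.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x = nesterov(lambda y: A @ y - b, np.zeros(2), L=10.0, mu=1.0)
print(np.round(x, 4))  # ~ [1.0, 0.1], the minimizer A^{-1} b
```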
