
Computing routing schemes that support both high throughput and low latency is one of the core challenges of network optimization. Such routes can be formalized as $h$-length flows, which are defined as flows whose flow paths are restricted to have length at most $h$. Many well-studied algorithmic primitives -- such as maximal and maximum length-constrained disjoint paths -- are special cases of $h$-length flows. Likewise, the optimal $h$-length flow is a fundamental quantity in network optimization, characterizing, up to poly-log factors, how quickly a network can accomplish numerous distributed primitives. In this work, we give the first efficient algorithms for computing $(1 - \epsilon)$-approximate $h$-length flows. We give deterministic algorithms that take $\tilde{O}(\text{poly}(h, \frac{1}{\epsilon}))$ parallel time and $\tilde{O}(\text{poly}(h, \frac{1}{\epsilon}) \cdot 2^{O(\sqrt{\log n})})$ distributed CONGEST time. We also give a CONGEST algorithm that succeeds with high probability and only takes $\tilde{O}(\text{poly}(h, \frac{1}{\epsilon}))$ time. Using our $h$-length flow algorithms, we give the first efficient deterministic CONGEST algorithms for the maximal length-constrained disjoint paths problem -- settling an open question of Chang and Saranurak (FOCS 2020) -- as well as essentially-optimal parallel and distributed approximation algorithms for maximum length-constrained disjoint paths. The former greatly simplifies deterministic CONGEST algorithms for computing expander decompositions. We also use our techniques to give the first efficient $(1-\epsilon)$-approximation algorithms for bipartite $b$-matching in CONGEST. Lastly, using our flow algorithms, we give the first algorithms to efficiently compute $h$-length cutmatches, an object at the heart of recent advances in length-constrained expander decompositions.
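
To make the objective concrete, the sketch below computes a fractional $(1-O(\epsilon))$-approximate maximum $h$-length $s$-$t$ flow with a sequential Garg-Konemann-style multiplicative-weights loop over a truncated Bellman-Ford. This is only an illustration of the length-constrained flow objective, not the paper's parallel or distributed algorithm; all function names and parameters are our own.

```python
import math

def cheapest_h_hop_path(n, edges, s, t, h, length):
    """Bellman-Ford truncated after h rounds: returns (as a list of edge
    indices) the cheapest s-t path using at most h edges under the current
    edge lengths, or None if t is unreachable within h hops."""
    INF = float("inf")
    dist = [[INF] * n for _ in range(h + 1)]
    pred = [[None] * n for _ in range(h + 1)]
    dist[0][s] = 0.0
    for k in range(1, h + 1):
        dist[k] = dist[k - 1][:]                 # using fewer hops is allowed
        for i, (u, v, _) in enumerate(edges):
            if dist[k - 1][u] + length[i] < dist[k][v]:
                dist[k][v] = dist[k - 1][u] + length[i]
                pred[k][v] = (u, i)
    if dist[h][t] == INF:
        return None
    path, v, k = [], t, h
    while v != s:
        if pred[k][v] is None:                   # value inherited from round k-1
            k -= 1
        else:
            u, i = pred[k][v]
            path.append(i)
            v, k = u, k - 1
    return path[::-1]

def approx_max_h_length_flow(n, edges, s, t, h, eps=0.1):
    """Route along cheapest short paths under exponentially growing edge
    lengths (Garg-Konemann style), then scale down to a feasible flow.
    edges: list of (u, v, capacity); returns per-edge flow values."""
    m = len(edges)
    delta = (1.0 + eps) * ((1.0 + eps) * m) ** (-1.0 / eps)
    length = [delta / cap for (_, _, cap) in edges]
    flow = [0.0] * m
    while True:
        path = cheapest_h_hop_path(n, edges, s, t, h, length)
        if path is None or sum(length[i] for i in path) >= 1.0:
            break
        c = min(edges[i][2] for i in path)       # bottleneck capacity
        for i in path:
            flow[i] += c
            length[i] *= 1.0 + eps * c / edges[i][2]
    scale = math.log(1.0 / delta, 1.0 + eps)     # overcounting factor
    return [f / scale for f in flow]
```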

Related Content

Networking: IFIP International Conferences on Networking. Explanation: international conference on networking. Publisher: IFIP.

Submodular maximization has been a central topic in theoretical computer science and combinatorial optimization over the last decades. Many well-performing approximation algorithms have been designed for the problem under a variety of constraints. In this paper, we consider the submodular multiple knapsack problem (SMKP). In SMKP, the profit of each subset of elements is specified by a monotone submodular function. The goal is to find a feasible packing of elements over multiple bins (knapsacks) that maximizes the profit. Recently, Fairstein et al.~[ESA20] proposed a nearly optimal $(1-e^{-1}-\epsilon)$-approximation algorithm for SMKP. Their algorithm is obtained by combining a configuration LP, a grouping technique for bin packing, and the continuous greedy algorithm for submodular maximization. As a result, the algorithm is somewhat sophisticated and inherently randomized. In this paper, we present an arguably simple deterministic combinatorial algorithm for SMKP, which achieves a $(1-e^{-1}-\epsilon)$-approximation ratio. Our algorithm is based on ideas very different from those of Fairstein et al.~[ESA20].
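
As a point of reference for the problem statement, here is a naive greedy baseline for SMKP; it carries no guarantee comparable to the $(1-e^{-1}-\epsilon)$ algorithms discussed above and is purely illustrative, with all names our own.

```python
def greedy_smkp(elements, size, capacity, f):
    """Repeatedly add the element with the best marginal-profit-to-size
    ratio that still fits in some bin. Since the submodular profit f depends
    only on the chosen set (not on the assignment), elements are packed
    first-fit. Returns the chosen set and the per-bin loads."""
    chosen, load = set(), {b: 0.0 for b in capacity}
    while True:
        base, best = f(chosen), None
        for e in elements:
            if e in chosen:
                continue
            b = next((b for b in capacity
                      if load[b] + size[e] <= capacity[b]), None)
            if b is None:
                continue                          # e fits in no bin
            gain = f(chosen | {e}) - base         # marginal profit
            ratio = gain / max(size[e], 1e-12)
            if gain > 0 and (best is None or ratio > best[0]):
                best = (ratio, e, b)
        if best is None:
            return chosen, load
        _, e, b = best
        chosen.add(e)
        load[b] += size[e]
```

For instance, instantiating f as a coverage function $f(S) = |\bigcup_{e \in S} A_e|$ gives a quick sanity baseline against which the guarantees above can be appreciated.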

The average reward criterion is relatively less studied, as most existing works in the reinforcement learning literature consider the discounted reward criterion. A few recent works present on-policy average reward actor-critic algorithms, but average reward off-policy actor-critic methods remain relatively unexplored. In this work, we present both on-policy and off-policy deterministic policy gradient theorems for the average reward performance criterion. Using these theorems, we also present an Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) algorithm. We first show an asymptotic convergence analysis using the ODE-based method. Subsequently, we provide a finite-time analysis of the resulting stochastic approximation scheme with a linear function approximator and obtain an $\epsilon$-optimal stationary policy with a sample complexity of $\Omega(\epsilon^{-2.5})$. We compare the average reward performance of our proposed ARO-DDPG algorithm with that of state-of-the-art on-policy average reward actor-critic algorithms on MuJoCo-based environments and observe better empirical performance.
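
The essential difference from discounted DDPG is visible in the TD target, where discounting is replaced by subtracting a running estimate $\rho$ of the average reward. Below is a hedged one-step sketch with a linear critic and a linear deterministic policy, in our own notation; the paper's ARO-DDPG replaces these linear maps with deep networks and a replay buffer.

```python
import numpy as np

def numerical_grad(f, a, h=1e-5):
    """Finite-difference gradient of a scalar function f at action a."""
    a, g = np.asarray(a, dtype=float), np.zeros(len(a))
    for i in range(len(a)):
        e = np.zeros(len(a)); e[i] = h
        g[i] = (f(a + e) - f(a - e)) / (2 * h)
    return g

def aro_step(s, a, r, s_next, w, theta, rho, phi,
             lr_w=1e-2, lr_theta=1e-3, lr_rho=1e-2):
    """One update from an off-policy transition (s, a, r, s_next).
    Critic: Q_w(s, a) = w . phi(s, a); actor: mu_theta(s) = theta @ s."""
    a_next = theta @ s_next                       # action of the current policy
    # Average-reward TD error: "- rho" plays the role of discounting.
    delta = r - rho + w @ phi(s_next, a_next) - w @ phi(s, a)
    w = w + lr_w * delta * phi(s, a)              # semi-gradient critic step
    rho = rho + lr_rho * delta                    # track the average reward
    # Deterministic policy gradient: grad_theta mu(s) times grad_a Q at mu(s).
    g_a = numerical_grad(lambda a_: w @ phi(s, a_), theta @ s)
    theta = theta + lr_theta * np.outer(g_a, s)
    return w, theta, rho
```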

The central problem we address in this work is estimation of the parameter support set S, the set of indices corresponding to nonzero parameters, in the context of a sparse parametric likelihood model for count-valued multivariate time series. We develop a computationally intensive algorithm that performs the estimation by aggregating support sets obtained by applying the LASSO to data subsamples. Our approach is to identify several well-fitting candidate models and estimate S by the most frequently used parameters, thus \textit{aggregating} candidate models rather than selecting a single candidate deemed optimal in some sense. While our method is broadly applicable to any selection problem, we focus on the generalized vector autoregressive model class, and in particular the Poisson case, because of (i) the difficulty of the support estimation problem arising from complex dependence in the data, (ii) recent work applying the LASSO in this context, and (iii) interesting applications in network recovery from discrete multivariate time series. We establish benchmark methods based on the LASSO and present empirical results demonstrating the superior performance of our method. Additionally, we present an application estimating ecological interaction networks from paleoclimatology data.
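
A minimal sketch of the aggregation idea, close in spirit to stability selection: fit the LASSO on many random subsamples and keep the coefficients selected in at least a $\tau$ fraction of the fits. We use a plain Gaussian LASSO from scikit-learn purely for illustration; the paper's setting is a Poisson generalized vector autoregression, and all parameter names below are ours.

```python
import numpy as np
from sklearn.linear_model import Lasso

def aggregate_support(X, y, n_subsamples=100, frac=0.7, alpha=0.1, tau=0.6,
                      seed=0):
    """Fit the LASSO on many random subsamples and estimate the support S
    as the set of coefficients selected in at least a tau fraction of fits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        fit = Lasso(alpha=alpha, max_iter=5000).fit(X[idx], y[idx])
        counts += np.abs(fit.coef_) > 1e-10       # which indices were selected
    return np.flatnonzero(counts / n_subsamples >= tau)
```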

During recent years, the interest of the optimization and machine learning communities in high-probability convergence of stochastic optimization methods has been growing. One of the main reasons for this is that high-probability complexity bounds are more accurate, yet less studied, than in-expectation ones. However, state-of-the-art high-probability non-asymptotic convergence results are derived under strong assumptions such as the boundedness of the gradient noise variance or of the objective's gradient itself. In this paper, we propose several algorithms with high-probability convergence results under less restrictive assumptions. In particular, we derive new high-probability convergence results under the assumption that the gradient/operator noise has bounded central $\alpha$-th moment for $\alpha \in (1,2]$ in the following setups: (i) smooth non-convex / Polyak-Lojasiewicz / convex / strongly convex / quasi-strongly convex minimization problems, (ii) Lipschitz / star-cocoercive and monotone / quasi-strongly monotone variational inequalities. These results justify the usage of the considered methods for solving problems that do not fit standard functional classes studied in stochastic optimization.
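
Gradient clipping is the standard device behind high-probability guarantees under such heavy-tailed (bounded central $\alpha$-th moment) noise. The following is a minimal clipped-SGD sketch in our own notation, not the paper's exact algorithms.

```python
import numpy as np

def clipped_sgd(grad_oracle, x0, steps=1000, lr=0.01, clip_level=1.0):
    """SGD whose stochastic gradients are norm-clipped before each step,
    which tames heavy-tailed noise. grad_oracle(x) returns a stochastic
    gradient at x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_oracle(x)
        norm = np.linalg.norm(g)
        if norm > clip_level:                     # shrink rare huge gradients
            g = g * (clip_level / norm)
        x = x - lr * g
    return x
```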

Model checking has been proposed as a formal verification approach for analyzing computer-based and cyber-physical systems. The state space explosion problem is the main obstacle to applying this approach to sophisticated systems. Bisimulation minimization is a prominent method for reducing the number of states in a labeled transition system and is used to alleviate the state space explosion problem. For systems with stochastic behaviors, probabilistic bisimulation is used to reduce a given model to its minimal equivalent. In recent years, several techniques have been proposed to reduce the time complexity of the iterative methods for computing the probabilistic bisimulation of stochastic systems with nondeterministic behaviors. In this paper, we propose several techniques to accelerate the iterative process of partitioning the state space of a given probabilistic model into its bisimulation classes. The first technique applies two ordering heuristics for choosing splitter blocks. The second technique uses hash tables to reduce the running time and the average time complexity of the standard iterative method. The proposed approaches are implemented and run on several conventional case studies and reduce the running time by one order of magnitude on average.
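
For intuition, the sketch below runs signature-based partition refinement for probabilistic bisimulation on a plain labelled Markov chain, hashing each state's per-block transition mass to form the next partition. The paper treats the harder nondeterministic (MDP) case and adds splitter-ordering heuristics, but hashing signatures is the same basic device; names and data layout below are ours.

```python
from collections import defaultdict

def prob_bisimulation(states, label, P):
    """Coarsest probabilistic bisimulation of a labelled Markov chain.
    P[s]: dict mapping successor -> probability; label[s]: state label.
    Returns a dict mapping each state to its final block id."""
    ids = {}
    block = {s: ids.setdefault(label[s], len(ids)) for s in states}
    while True:
        sig = {}
        for s in states:
            mass = defaultdict(float)
            for t, p in P[s].items():
                mass[block[t]] += p        # probability mass into each block
            sig[s] = (block[s], tuple(sorted(mass.items())))
        new_ids, new_block = {}, {}
        for s in states:                   # hash each signature to a block id
            new_block[s] = new_ids.setdefault(sig[s], len(new_ids))
        if len(new_ids) == len(set(block.values())):
            return new_block               # no block was split: stable
        block = new_block
```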

We consider the classic budgeted maximum weight independent set (BMWIS) problem. The input is a graph $G = (V,E)$, a weight function $w:V \rightarrow \mathbb{R}_{\geq 0}$, a cost function $c:V \rightarrow \mathbb{R}_{\geq 0}$, and a budget $B \in \mathbb{R}_{\geq 0}$. The goal is to find an independent set $S \subseteq V$ in $G$ such that $\sum_{v \in S} c(v) \leq B$, which maximizes the total weight $\sum_{v \in S} w(v)$. Since the problem on general graphs cannot be approximated within ratio $|V|^{1-\varepsilon}$ for any $\varepsilon>0$, BMWIS has attracted significant attention on graph families for which a maximum weight independent set can be computed in polynomial time. Two notable such graph families are bipartite and perfect graphs. BMWIS is known to be NP-hard on both of these graph families; however, the best possible approximation guarantees for these graphs are wide open. In this paper, we give a tight $2$-approximation for BMWIS on perfect graphs and bipartite graphs. In particular, we give a $(2-\varepsilon)$ lower bound for BMWIS on bipartite graphs, already for the special case where the budget is replaced by a cardinality constraint, based on the Small Set Expansion Hypothesis (SSEH). For the upper bound, we design a $2$-approximation for BMWIS on perfect graphs using a Lagrangian relaxation based technique. Finally, we obtain a tight lower bound for the capacitated maximum weight independent set (CMWIS) problem, the special case of BMWIS where $w(v) = c(v)~\forall v \in V$. We show that CMWIS on bipartite and perfect graphs is unlikely to admit an efficient polynomial-time approximation scheme (EPTAS). Thus, the existing PTAS for CMWIS is essentially the best we can expect.
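
The outer loop of such a Lagrangian relaxation can be sketched as a binary search over the multiplier $\lambda$, solving an unbudgeted MWIS instance with adjusted weights $w(v) - \lambda c(v)$ at each step (polynomial-time on perfect graphs). The sketch below assumes such an oracle and omits the combination step that the actual 2-approximation performs on the two returned candidate sets; all names are ours.

```python
def lagrangian_bmwis(V, w, c, B, mwis_oracle, iters=60):
    """Binary search over the Lagrange multiplier for BMWIS.
    mwis_oracle(weight) must return a maximum weight independent set for the
    given vertex weights (it may freely drop negative-weight vertices).
    Returns the best feasible candidate and the last over-budget one."""
    lo, hi = 0.0, max(w[v] / max(c[v], 1e-12) for v in V) + 1.0
    feasible, infeasible = set(), None
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        S = mwis_oracle({v: w[v] - lam * c[v] for v in V})
        if sum(c[v] for v in S) <= B:
            hi, feasible = lam, S          # feasible: relax the penalty
        else:
            lo, infeasible = lam, S        # over budget: penalise cost more
    return feasible, infeasible
```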

We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum cost flow in undirected graphs with real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph $G = (V, E)$, vertex demands $b \in \mathbb{R}^V$ such that $\sum_{v \in V} b(v) = 0$, positive edge costs $c \in \mathbb{R}_{>0}^E$, and a parameter $\varepsilon > 0$. In $O(\varepsilon^{-2} m \log^{O(1)} n)$ time, it returns a flow $f$ such that the net flow out of each vertex is equal to the vertex's demand and the cost of the flow is within a $(1 + \varepsilon)$ factor of optimal. Our algorithm is combinatorial and has no running time dependency on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all almost- and near-linear time approximation schemes for transshipment relied on randomization in two main ways: 1) to embed the problem instance into low-dimensional space and 2) to randomly pick target locations to send flow so nearby opposing demands can be satisfied. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the $\Omega(n^2)$ vertex-vertex distances that an approximation of this kind suggests, we also limit the available routing decisions using distances explicitly stored in the well-known Thorup-Zwick distance oracle.
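
For intuition about the tree-routing decisions mentioned above: on a tree, transshipment is trivial, because the flow on the edge above a vertex must equal the net demand of its subtree. The following sketch (our notation, recursive for brevity) computes that routing and its cost.

```python
def tree_transshipment(children, root, demand):
    """children[v]: list of (child, edge_cost) in a rooted tree; demand[v]
    sums to zero over all vertices. Returns the per-edge flow (keyed by
    (parent, child), positive = flow from parent toward child) and its cost."""
    flow, cost = {}, 0.0
    def net(v):
        nonlocal cost
        total = demand[v]
        for child, c in children[v]:
            d = net(child)                # net demand of the child's subtree
            flow[(v, child)] = -d         # a surplus d > 0 must flow upward
            cost += abs(d) * c
            total += d
        return total
    assert abs(net(root)) < 1e-9, "demands must sum to zero"
    return flow, cost
```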

The last decade has seen many attempts to generalise the definition of modes, or MAP estimators, of a probability distribution $\mu$ on a space $X$ to the case that $\mu$ has no continuous Lebesgue density, and in particular to infinite-dimensional Banach and Hilbert spaces $X$. This paper examines the properties of and connections among these definitions. We construct a systematic taxonomy -- or `periodic table' -- of modes that includes the established notions as well as large hitherto-unexplored classes. We establish implications between these definitions and provide counterexamples to distinguish them. We also distinguish those definitions that are merely `grammatically correct' from those that are `meaningful' in the sense of satisfying certain `common-sense' axioms for a mode, among them the correct handling of discrete measures and those with continuous Lebesgue densities. However, despite there being 17 such `meaningful' definitions of mode, we show that none of them satisfy the `merging property', under which the modes of $\mu|_{A}$, $\mu|_{B}$ and $\mu|_{A \cup B}$ enjoy a straightforward relationship for well-separated positive-mass events $A,B \subseteq X$.
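
For concreteness, one of the established small-ball notions that the taxonomy covers (the `strong mode' of Dashti, Law, Stuart and Voss, stated here as we recall it) declares a point $u \in X$ a mode of $\mu$ when
\[
  \lim_{r \to 0} \frac{\mu(B_r(u))}{\sup_{x \in X} \mu(B_r(x))} = 1,
\]
where $B_r(x)$ denotes the metric ball of radius $r$ centred at $x$; for a measure with a continuous Lebesgue density this recovers the usual maximiser of the density.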

Short-range wireless technologies will enable vehicles to communicate and coordinate their actions, thus improving people's safety and traffic efficiency. Whereas IEEE 802.11p (and related standards) had been the only practical solution for years, in 2016 a new option was introduced with Release 14 of long term evolution (LTE), which includes new features to enable direct vehicle-to-vehicle (V2V) communications. LTE-V2V promises a more efficient use of the channel compared to IEEE 802.11p thanks to an improved PHY layer and the use of orthogonal resources at the MAC layer. In LTE-V2V, a key role is played by the resource allocation algorithm, and increasing efforts are being made to design new solutions that optimize spatial reuse. In this context, an important yet little-studied aspect is that of identifying references that allow one: 1) to gauge the space in which resource allocation algorithms operate; and 2) to verify the performance of new proposals. In this work, we focus on a highway scenario and identify two algorithms to be used as minimum and maximum references in terms of the packet reception probability (PRP). The PRP is derived as a function of various parameters that describe the scenario and settings, from the application to the physical layer. Results, obtained both in a simplified Poisson point process scenario and with realistic traffic traces, show that the PRP varies considerably with different algorithms and that there is room for the improvement of current solutions.
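
To illustrate how a PRP reference can be obtained in a simplified Poisson point process scenario, here is a toy Monte Carlo sketch in which a transmission fails exactly when another vehicle within interference range has picked the same resource. This drastically simplifies the paper's model (no fading, no power capture, random rather than optimized allocation), and all parameter names are our own.

```python
import numpy as np

def prp_ppp(density, interference_range, n_resources,
            road_len=10_000.0, trials=2_000, seed=0):
    """Vehicles are dropped on a 1-D road as a Poisson point process (density
    in vehicles/m) and each picks one of n_resources uniformly at random; a
    tagged transmission counts as received iff no other vehicle within
    interference_range chose the same resource."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        n = rng.poisson(density * road_len)
        if n < 2:                                 # nobody around to collide with
            ok += 1
            continue
        pos = rng.uniform(0.0, road_len, n)
        res = rng.integers(0, n_resources, n)
        clash = (np.abs(pos - pos[0]) < interference_range) & (res == res[0])
        clash[0] = False                          # ignore the tagged vehicle
        ok += 0 if clash.any() else 1
    return ok / trials                            # estimated PRP
```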

One of the most studied extensions of the famous Traveling Salesperson Problem (TSP) is the {\sc Multiple TSP}: a set of $m\geq 1$ salespersons collectively traverses a set of $n$ cities by $m$ non-trivial tours, to minimize the total length of their tours. This problem can also be considered to be a variant of {\sc Uncapacitated Vehicle Routing} where the objective function is the sum of all tour lengths. When all $m$ tours start from a single common \emph{depot} $v_0$, then the metric {\sc Multiple TSP} can be approximated equally well as the standard metric TSP, as shown by Frieze (1983). The {\sc Multiple TSP} becomes significantly harder to approximate when there is a \emph{set} $D$ of $d \geq 1$ depots that form the starting and end points of the $m$ tours. For this case only a $(2-1/d)$-approximation in polynomial time is known, as well as a $3/2$-approximation for \emph{constant} $d$ which requires a prohibitive run time of $n^{\Theta(d)}$ (Xu and Rodrigues, \emph{INFORMS J. Comput.}, 2015). A recent work of Traub, Vygen and Zenklusen (STOC 2020) gives another approximation algorithm for {\sc Multiple TSP} running in time $n^{\Theta(d)}$ and reducing the problem to approximating TSP. In this paper we overcome the $n^{\Theta(d)}$ time barrier: we give the first efficient approximation algorithm for {\sc Multiple TSP} with a \emph{variable} number $d$ of depots that yields a better-than-2 approximation. Our algorithm runs in time $(1/\varepsilon)^{\mathcal O(d\log d)}\cdot n^{\mathcal O(1)}$, and produces a $(3/2+\varepsilon)$-approximation with constant probability. For the graphic case, we obtain a deterministic $3/2$-approximation in time $2^d\cdot n^{\mathcal O(1)}$.
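
For contrast with the multi-depot difficulty, the single-depot reduction behind Frieze's result is simple enough to sketch: cut one TSP tour over the cities into $m$ consecutive segments and close each segment through the depot, so any TSP approximation carries over. A hedged sketch under that reduction (names ours):

```python
import math

def split_single_depot_tour(depot, order, m, dist):
    """order: a TSP visiting order of the cities (excluding the depot);
    dist: the metric. Cuts the tour into at most m segments and closes each
    through the depot, returning the m tours and their total length."""
    k = math.ceil(len(order) / m)          # cities per salesperson
    tours = [[depot] + order[i:i + k] + [depot]
             for i in range(0, len(order), k)]
    cost = sum(dist(t[j], t[j + 1]) for t in tours for j in range(len(t) - 1))
    return tours, cost
```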
