
The Voronoi diagrams technique was introduced by Cabello to compute the diameter of planar graphs in subquadratic time. We present novel applications of this technique in static, fault-tolerant, and partially-dynamic undirected unweighted planar graphs, as well as some new limitations. 1. In the static case, we give $n^{3+o(1)}/D^2$ and $\tilde{O}(n\cdot D^2)$ time algorithms for computing the diameter of a planar graph $G$ with diameter $D$. These are faster than the state-of-the-art $\tilde{O}(n^{5/3})$ algorithm when $D<n^{1/3}$ or $D>n^{2/3}$. 2. In the fault-tolerant setting, we give an $n^{7/3+o(1)}$ time algorithm for computing the diameter of $G\setminus \{e\}$ for every edge $e$ in $G$ (the replacement diameter problem). This compares favorably to the naive $\tilde{O}(n^{8/3})$ time algorithm that runs the static algorithm once per edge. 3. In the incremental setting, where we wish to maintain the diameter while adding edges, we present an algorithm with total running time $n^{7/3+o(1)}$. This compares favorably to the naive $\tilde{O}(n^{8/3})$ time algorithm that reruns the static algorithm after every update. 4. We give a lower bound (conditioned on the SETH) ruling out an amortized $O(n^{1-\varepsilon})$ update time for maintaining the diameter in *weighted* planar graphs. The lower bound holds even for incremental or decremental updates. Our upper bounds are obtained by novel uses and manipulations of Voronoi diagrams. These include maintaining the Voronoi diagram when edges of the graph are deleted, allowing the sites of the Voronoi diagram to lie on a BFS tree level (rather than on the boundaries of an $r$-division), and a new reduction from incremental diameter to incremental distance oracles that could be of interest beyond planar graphs. Our lower bound is the first lower bound for a dynamic planar graph problem that is conditioned on the SETH.
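For reference, the naive static baseline that the abstract's algorithms are measured against is simply one BFS per vertex, which computes the diameter of an unweighted graph in $O(nm)$ time. A minimal sketch (function names are illustrative, not from the paper):

```python
from collections import deque

def bfs_ecc(adj, s):
    """Eccentricity of s: the largest BFS distance from s."""
    dist = {s: 0}
    q = deque([s])
    ecc = 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                ecc = max(ecc, dist[v])
                q.append(v)
    return ecc

def diameter(adj):
    """Naive O(n*m) diameter: one BFS per vertex."""
    return max(bfs_ecc(adj, s) for s in adj)

# 5-cycle: every vertex has eccentricity 2, so the diameter is 2
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(diameter(cycle5))  # 2
```

The fault-tolerant baseline mentioned in item 2 is this routine rerun on $G\setminus\{e\}$ for each edge $e$; the paper's contribution is avoiding that full recomputation.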


We present near-optimal algorithms for detecting small vertex cuts in the CONGEST model of distributed computing. Despite extensive research in this area, our understanding of the vertex connectivity of a graph is still incomplete, especially in the distributed setting. To date, all distributed algorithms for detecting cut vertices suffer from an inherent dependency on the maximum degree of the graph, $\Delta$. Hence, in particular, there is no truly sub-linear time algorithm for this problem, not even for detecting a single cut vertex. We take a new algorithmic approach to vertex connectivity which allows us to bypass the existing $\Delta$ barrier. As a warm-up to our approach, we show a simple $\widetilde{O}(D)$-round randomized algorithm for computing all cut vertices in a $D$-diameter $n$-vertex graph. This improves upon the $O(D+\Delta/\log n)$-round algorithm of [Pritchard and Thurimella, ICALP 2008]. Our key technical contribution is an $\widetilde{O}(D)$-round randomized algorithm for computing all cut pairs in the graph, improving upon the state-of-the-art $O(\Delta \cdot D)^4$-round algorithm of [Parter, DISC '19]. Note that even in the considerably simpler setting of edge cuts, $\widetilde{O}(D)$-round algorithms are currently known only for detecting pairs of cut edges. Our approach employs the well-known linear graph sketching technique [Ahn, Guha and McGregor, SODA 2012] along with the heavy-light tree decomposition of [Sleator and Tarjan, STOC 1981]. Combining this with a careful characterization of the survivable subgraphs allows us to determine the connectivity of $G \setminus \{x,y\}$ for every pair $x,y \in V$ in $\widetilde{O}(D)$ rounds. We believe that the tools provided in this paper will be useful for removing the $\Delta$-dependency even for larger cut values.
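For contrast with the distributed algorithms above, the classical *centralized* baseline for cut-vertex detection is the Hopcroft-Tarjan DFS low-link computation, which finds all articulation points in linear time. A minimal sketch (recursive, so suitable only for small graphs as written):

```python
def cut_vertices(adj):
    """All articulation points of an undirected graph, via DFS low-links."""
    disc, low = {}, {}
    cuts = set()
    timer = 0

    def dfs(u, parent):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:               # back edge
                low[u] = min(low[u], disc[v])
            else:                       # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # non-root u is a cut vertex if some subtree cannot
                # reach above u without passing through u
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:  # root with >1 DFS child
            cuts.add(u)

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return cuts

adj = {0: [1], 1: [0, 2], 2: [1]}  # path 0-1-2
print(cut_vertices(adj))  # {1}
```

The distributed difficulty the abstract addresses is precisely that this global DFS traversal has no efficient CONGEST analogue, which is why sketching and heavy-light decompositions are used instead.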

We solve acoustic scattering problems by means of the isogeometric boundary integral equation method. In order to avoid spurious modes, we apply the combined field integral equations for either sound-hard or sound-soft scatterers. These integral equations are discretized by Galerkin's method, which in particular enables the mathematically correct regularization of the hypersingular integral operator. In order to avoid densely populated system matrices, we employ the isogeometric fast multipole method. The result is an algorithm that scales essentially linearly in the number of boundary elements. Numerical experiments demonstrate the feasibility and performance of the approach.

We establish globally optimal solutions to a class of fractional optimization problems on a class of constraint sets, whose key characteristics are as follows: 1) The numerator and the denominator of the objective function are both convex, semi-algebraic, Lipschitz continuous, and differentiable with Lipschitz continuous gradients on the constraint set. 2) The constraint set is closed, convex, and semi-algebraic. Compared with Dinkelbach's approach, our novelty lies in the following aspects: 1) Dinkelbach's approach has to solve a concave maximization problem in each iteration, for which a solution is nontrivial to obtain, while ours only needs to perform one proximal gradient operation per iteration. 2) Dinkelbach's approach requires at least one point at which the numerator is nonnegative to start the algorithm, but ours does not, making it applicable to a much wider class of situations. 3) Dinkelbach's approach requires a closed and bounded constraint set, while ours only needs closedness, not boundedness. Therefore, our approach is viable for many more practical models, such as optimizing the Sharpe ratio (SR) or the Information ratio in mathematical finance. Numerical experiments show that our approach recovers the ground-truth solutions in two simple examples. On real-world financial data, it outperforms several existing approaches for SR maximization.
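Dinkelbach's classical iteration, which the abstract compares against, alternates between solving the parametric problem $\max_x f(x)-\lambda g(x)$ and updating $\lambda$ to the ratio at the maximizer. A toy sketch on a finite grid (the inner maximization is done by brute force purely for illustration; this is the baseline method, not the paper's algorithm):

```python
def dinkelbach(f, g, xs, tol=1e-9, max_iter=100):
    """Dinkelbach's iteration for max_x f(x)/g(x) (with g > 0) over a
    finite grid xs: repeatedly solve max_x f(x) - lam*g(x) and update
    lam to the ratio at the maximizer. Stops once the parametric
    optimum reaches 0, i.e. lam equals the optimal ratio."""
    lam = 0.0
    x = xs[0]
    for _ in range(max_iter):
        x = max(xs, key=lambda t: f(t) - lam * g(t))
        if abs(f(x) - lam * g(x)) < tol:
            break
        lam = f(x) / g(x)
    return x, lam

# Toy problem: maximize (x + 1) / (x^2 + 1) on [0, 2];
# the true maximizer is x = sqrt(2) - 1 with ratio (1 + sqrt(2)) / 2.
f = lambda x: x + 1.0
g = lambda x: x * x + 1.0
xs = [i / 1000.0 for i in range(2001)]
x_star, ratio = dinkelbach(f, g, xs)
```

The abstract's point is that each grid-search step above stands in for a full concave maximization, whereas the proposed method replaces it with a single proximal gradient step per iteration.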

Multiple algorithms are known for efficiently calculating the prefix probability of a string under a probabilistic context-free grammar (PCFG). Good algorithms for the problem have a runtime cubic in the length of the input string, but some are suboptimal with respect to the size of the grammar. This paper proposes a novel speed-up of Jelinek and Lafferty's (1991) algorithm, which originally runs in $O(n^3 |N|^3 + |N|^4)$, where $n$ is the input length and $|N|$ is the number of non-terminals in the grammar. In contrast, our speed-up runs in $O(n^2 |N|^3 + n^3 |N|^2)$.
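The dynamic program underlying prefix-probability algorithms is the CKY-style inside computation, which scores *complete* strings in time cubic in $n$; prefix probability additionally sums over all continuations, which is where Jelinek and Lafferty's machinery comes in. A minimal sketch of the inside computation for a grammar in Chomsky normal form (the data layout here is an illustrative assumption, not the paper's):

```python
from collections import defaultdict

def inside_probability(words, lexical, binary, start="S"):
    """Classical CKY inside algorithm for a PCFG in Chomsky normal form.
    lexical[(A, w)] = P(A -> w), binary[(A, B, C)] = P(A -> B C).
    Returns the total probability of all parses of `words`."""
    n = len(words)
    chart = defaultdict(float)  # chart[i, j, A] = inside prob of A over words[i:j]
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                chart[i, i + 1, A] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for (A, B, C), p in binary.items():
                    chart[i, j, A] += p * chart[i, k, B] * chart[k, j, C]
    return chart[0, n, start]

# Toy grammar: S -> A A (prob 1), A -> 'a' (prob 1)
lexical = {("A", "a"): 1.0}
binary = {("S", "A", "A"): 1.0}
print(inside_probability(["a", "a"], lexical, binary))  # 1.0
```

The triple loop over spans and split points is the source of the $n^3$ factor the abstract refers to; the grammar-size factors come from the loop over rules.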

In 1975 the first author proved that every finite tight two-person game form $g$ is Nash-solvable, that is, for all payoffs $u$ and $w$ of the two players, the obtained game $(g;u,w)$, in normal form, has a Nash equilibrium (NE) in pure strategies. This result was extended in several directions; here we strengthen it further. We construct two special NE, each realized by a lexicographically safe (lexsafe) strategy of one player and a best response of the other. We obtain a polynomial algorithm computing these lexsafe NE. This is trivial when the game form $g$ is given explicitly. Yet, in applications $g$ is frequently realized by an oracle $\mathcal{O}$ such that the size of $g$ is exponential in the size $|\mathcal{O}|$ of $\mathcal{O}$. We assume that the game form $g = g(\mathcal{O})$ generated by $\mathcal{O}$ is tight and that an arbitrary {\em win-lose game} $(g;u,w)$ (in which the payoffs $u$ and $w$ are zero-sum and take only values $\pm 1$) can be solved in time polynomial in $|\mathcal{O}|$. These assumptions allow us to construct an algorithm computing the two (one for each player) lexsafe NE in time polynomial in $|\mathcal{O}|$. We consider four types of oracles known in the literature and show that all four satisfy the above assumptions.

Consider a two-person zero-sum search game between a Hider and a Searcher. The Hider chooses to hide in one of $n$ discrete locations (or "boxes") and the Searcher chooses a search sequence specifying which order to look in these boxes until finding the Hider. A search at box $i$ takes $t_i$ time units and finds the Hider - if hidden there - independently with probability $q_i$, for $i=1,\ldots,n$. The Searcher wants to minimize the expected total time needed to find the Hider, while the Hider wants to maximize it. It is shown in the literature that the Searcher has an optimal search strategy that mixes up to $n$ distinct search sequences with appropriate probabilities. This paper investigates the existence of optimal pure strategies for the Searcher - a single deterministic search sequence that achieves the optimal expected total search time regardless of where the Hider hides. We identify several cases in which the Searcher has an optimal pure strategy, and several cases in which such optimal pure strategy does not exist. An optimal pure search strategy has significant practical value because the Searcher does not need to randomize their actions and will avoid second guessing themselves if the chosen search sequence from an optimal mixed strategy does not turn out well.
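Evaluating a single pure search sequence against a fixed hiding place is straightforward and makes the payoff structure above concrete; a sketch, assuming a finite sequence and reporting the residual probability that the sequence never finds the Hider:

```python
def expected_time(seq, t, q, hider):
    """Expected time for the finite search sequence `seq` (a list of box
    indices) to find a Hider hidden in box `hider`. A search of box b
    takes t[b] time and, if the Hider is there, succeeds independently
    with probability q[b]. Returns (E, miss): E is the expectation
    restricted to the event that the sequence finds the Hider, and
    miss is the probability it never does."""
    elapsed = 0.0   # total time spent so far
    miss = 1.0      # probability the Hider has not been found yet
    E = 0.0
    for b in seq:
        elapsed += t[b]
        if b == hider:
            E += miss * q[b] * elapsed   # found exactly at this search
            miss *= 1.0 - q[b]
    return E, miss

# Two boxes: box 0 is certain (q=1), box 1 is found with prob 1/2 per look.
t, q = [1.0, 2.0], [1.0, 0.5]
seq = [0] + [1] * 30          # look in box 0 once, then repeatedly in box 1
E1, miss1 = expected_time(seq, t, q, 1)   # approaches 5 as the tail vanishes
```

A candidate pure strategy can then be assessed by taking the worst case of this expectation over all hiding places, which is exactly the quantity the Hider maximizes.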

We present a fully polynomial-time approximation scheme (FPTAS) for computing equilibria in congestion games, under \emph{smoothed} running-time analysis. More precisely, we prove that if the resource costs of a congestion game are randomly perturbed by independent noises whose density is at most $\phi$, then \emph{any} sequence of $(1+\varepsilon)$-improving dynamics will reach a $(1+\varepsilon)$-approximate pure Nash equilibrium (PNE) after an expected number of steps that is strongly polynomial in $\frac{1}{\varepsilon}$, $\phi$, and the size of the game's description. Our results stand in sharp contrast to the traditional worst-case analysis setting, where it is known that better-response dynamics take exponentially long to converge to $\alpha$-approximate PNE, for any constant factor $\alpha\geq 1$. As a matter of fact, computing $\alpha$-approximate PNE in congestion games is PLS-hard. We demonstrate how our analysis can be applied to various models of congestion games, including general, step-function, and polynomial cost functions, as well as fair cost-sharing games (where the resource costs are decreasing). Importantly, our bounds do not depend explicitly on the cardinality of the players' strategy sets, so the smoothed FPTAS is readily applicable to network congestion games as well.
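The $(1+\varepsilon)$-improving dynamics analyzed above are easy to state concretely: while some player can reduce her cost by more than a $(1+\varepsilon)$ factor, let her deviate. A toy sketch on an explicitly represented congestion game (the data layout is an illustrative assumption):

```python
def player_cost(profile, i, costs):
    """Cost of player i under `profile`: each resource r with load k
    costs costs[r][k], and a player pays for every resource she uses."""
    loads = {}
    for strat in profile:
        for r in strat:
            loads[r] = loads.get(r, 0) + 1
    return sum(costs[r][loads[r]] for r in profile[i])

def improving_dynamics(strategies, costs, profile, eps=0.1, max_steps=10 ** 5):
    """(1+eps)-improving dynamics; terminates at a (1+eps)-approximate
    PNE, since Rosenthal's potential decreases with every move."""
    profile = list(profile)
    for _ in range(max_steps):
        moved = False
        for i in range(len(profile)):
            cur = player_cost(profile, i, costs)
            for s in strategies[i]:
                trial = profile[:i] + [s] + profile[i + 1:]
                if cur > (1 + eps) * player_cost(trial, i, costs):
                    profile, moved = trial, True
                    break
            if moved:
                break
        if not moved:
            return profile
    raise RuntimeError("did not converge")

# Two players, two parallel links "a" and "b", cost k under load k.
costs = {"a": {1: 1, 2: 2}, "b": {1: 1, 2: 2}}
strategies = [[("a",), ("b",)], [("a",), ("b",)]]
result = improving_dynamics(strategies, costs, [("a",), ("a",)])
print(sorted(result))  # [('a',), ('b',)] -- one player per link
```

The smoothed analysis in the abstract bounds the expected number of iterations of exactly this kind of loop when the cost values are randomly perturbed.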

In this study, we examine numerical approximations for second-order linear and nonlinear differential equations with diverse boundary conditions, followed by residual corrections of the first approximations. We first obtain numerical results using the Galerkin weighted residual approach with Bernstein polynomials. Residuals arise because this first approximation is computed numerically. To minimize these residuals, we use a compact finite difference scheme of fourth-order convergence to solve the error differential equations subject to the error boundary conditions. We also introduce a formulation of the fourth-order compact finite difference method for nonlinear BVPs. The improved approximations are produced by adding the error values derived from the approximations of the error differential equation to the weighted residual values. Numerical results are compared to the exact solutions and to solutions available in the published literature to validate the proposed scheme, and high accuracy is achieved in all cases.
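For the standard linear model problem $u''=f$ with Dirichlet data, the fourth-order compact scheme replaces the pointwise right-hand side with the weighted average $(f_{i-1}+10f_i+f_{i+1})/12$ while keeping the three-point stencil on the left. A minimal sketch on this model problem (not the paper's Galerkin/error-equation pipeline), solved with a Thomas tridiagonal sweep:

```python
import math

def compact_poisson(f, a, b, ua, ub, n):
    """Fourth-order compact scheme for u'' = f(x) on [a, b] with
    Dirichlet data u(a) = ua, u(b) = ub:
        u_{i-1} - 2 u_i + u_{i+1} = h^2 (f_{i-1} + 10 f_i + f_{i+1}) / 12."""
    h = (b - a) / n
    x = [a + i * h for i in range(n + 1)]
    fv = [f(xi) for xi in x]
    m = n - 1                      # number of interior unknowns
    lo, di, up = [1.0] * m, [-2.0] * m, [1.0] * m
    rhs = [h * h * (fv[i - 1] + 10 * fv[i] + fv[i + 1]) / 12
           for i in range(1, n)]
    rhs[0] -= ua                   # fold boundary values into the RHS
    rhs[-1] -= ub
    for i in range(1, m):          # Thomas algorithm: forward elimination
        w = lo[i] / di[i - 1]
        di[i] -= w * up[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m                  # back substitution
    u[-1] = rhs[-1] / di[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i]
    return x, [ua] + u + [ub]

# Model problem: u'' = -pi^2 sin(pi x), exact solution u = sin(pi x).
f = lambda x: -math.pi ** 2 * math.sin(math.pi * x)
x, u = compact_poisson(f, 0.0, 1.0, 0.0, 0.0, 32)
err = max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))
```

Because the averaged right-hand side cancels the leading truncation term of the three-point stencil, the error decays like $h^4$ rather than $h^2$ on smooth problems.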

We consider the problem of answering connectivity queries on a real algebraic curve. The curve is given as the real trace of an algebraic curve, assumed to be in generic position and defined by some rational parametrizations. The query points are given by a zero-dimensional parametrization. We design an algorithm which counts the number of connected components of the real curve under study and decides which query points lie in which connected components, in time log-linear in $N^6$, where $N$ is the maximum of the degrees and coefficient bit-sizes of the polynomials given as input. This matches the currently best-known bound for computing the topology of real plane curves. The main novelty of this algorithm is that it avoids computing the complete topology of the curve.

Multi-agent influence diagrams (MAIDs) are a popular form of graphical model that, for certain classes of games, have been shown to offer key complexity and explainability advantages over traditional extensive form game (EFG) representations. In this paper, we extend previous work on MAIDs by introducing the concept of a MAID subgame, as well as subgame perfect and trembling hand perfect equilibrium refinements. We then prove several equivalence results between MAIDs and EFGs. Finally, we describe an open source implementation for reasoning about MAIDs and computing their equilibria.
