The geodesic edge center of a polygon is a point c inside the polygon that minimizes the maximum geodesic distance from c to any edge of the polygon, where geodesic distance is the shortest path distance inside the polygon. We give a linear-time algorithm to find a geodesic edge center of a simple polygon. This improves on the previous O(n log n) time algorithm by Lubiw and Naredla [European Symposium on Algorithms, 2021]. The algorithm builds on an algorithm to find the geodesic vertex center of a simple polygon due to Pollack, Sharir, and Rote [Discrete & Computational Geometry, 1989] and an improvement to linear time by Ahn, Barba, Bose, De Carufel, Korman, and Oh [Discrete & Computational Geometry, 2016]. The geodesic edge center can easily be found from the geodesic farthest-edge Voronoi diagram of the polygon. Finding that Voronoi diagram in linear time is an open question, although the geodesic nearest-edge Voronoi diagram (the medial axis) can be found in linear time. As a first step of our geodesic edge center algorithm, we give a linear-time algorithm to find the geodesic farthest-edge Voronoi diagram restricted to the polygon boundary.
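
To make the minimax objective concrete: in a convex polygon every geodesic is a straight segment, so geodesic distance to an edge reduces to Euclidean point-to-segment distance. The following brute-force grid search is a sketch for intuition only, not the paper's linear-time algorithm; the function names are ours.

```python
# Brute-force illustration of the minimax objective, not the paper's
# linear-time algorithm. In a convex polygon every geodesic is a straight
# segment, so point-to-segment distance suffices. Function names are ours.
import math

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def inside_convex(p, polygon):
    """Point-in-convex-polygon test via consistent cross-product signs."""
    sign = 0
    for a, b in zip(polygon, polygon[1:] + polygon[:1]):
        cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True

def edge_center_grid(polygon, steps=200):
    """Approximate the edge center (and its minimax radius) by grid search."""
    edges = list(zip(polygon, polygon[1:] + polygon[:1]))
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    best = (float("inf"), None)
    for i in range(steps + 1):
        for j in range(steps + 1):
            p = (min(xs) + (max(xs) - min(xs)) * i / steps,
                 min(ys) + (max(ys) - min(ys)) * j / steps)
            if inside_convex(p, polygon):
                radius = max(dist_point_segment(p, a, b) for a, b in edges)
                best = min(best, (radius, p))
    return best

# For a 4x3 rectangle the minimax radius is 2.0 (half the longer side),
# attained by any point (2, y) with 1 <= y <= 2.
print(edge_center_grid([(0, 0), (4, 0), (4, 3), (0, 3)]))
```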

Related content

Modern software development extensively depends on existing libraries written by other developer teams from the same or a different organization. When a developer executes the software, the execution trace may cross the boundaries of multiple software products and create cross-project failures (CPFs). Existing studies show that a stand-alone executable failure report may enable the most effective communication, but creating such a report is often challenging due to the complicated interactions among files and dependencies in software ecosystems. In this paper, to solve the CPF report trilemma, we developed PExReport, which automatically creates stand-alone executable CPF reports. PExReport leverages build tools to prune source code and dependencies, and further analyzes the build process to create a pruned build environment for reproducing the CPF. We evaluated PExReport on 74 software project issues with 198 CPFs; the results show that it can create executable CPF reports for 184 out of 198 test failures in our dataset, with an average reduction of 72.97% in source classes and classes in internal JARs.
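
As a rough illustration of the pruning idea (PExReport itself drives build tools over real projects and JARs; the graph and class names below are hypothetical), a stand-alone report only needs the classes transitively reachable from the failure's stack trace:

```python
# Hypothetical sketch of failure-driven pruning: keep only classes
# transitively referenced from the failing test's stack trace.
from collections import deque

def prune(dependency_graph, failure_trace):
    """dependency_graph: class -> set of classes it references.
    failure_trace: classes appearing on the CPF stack trace.
    Returns the set of classes a pruned report must retain."""
    keep, queue = set(failure_trace), deque(failure_trace)
    while queue:
        cls = queue.popleft()
        for dep in dependency_graph.get(cls, ()):
            if dep not in keep:
                keep.add(dep)
                queue.append(dep)
    return keep

deps = {
    "app.OrderTest": {"app.Order", "lib.json.Parser"},
    "app.Order": {"lib.json.Parser"},
    "lib.json.Parser": {"lib.json.Lexer"},
    "app.Unrelated": {"lib.xml.Reader"},   # pruned away
}
print(sorted(prune(deps, ["app.OrderTest"])))
# ['app.Order', 'app.OrderTest', 'lib.json.Lexer', 'lib.json.Parser']
```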

Many real-world networks, like the Internet, are not the result of central design but instead the outcome of the interaction of local agents who selfishly optimize their individual utility. The famous Network Creation Game [Fabrikant et al., PODC 2003] enables us to understand such processes, their dynamics, and their outcomes in the form of equilibrium states. In this model, agents buy incident edges towards other agents for a price of $\alpha$ and simultaneously try to minimize their buying cost and their total hop distance. Since in many real-world networks, e.g., social networks, consent from both sides is required to maintain a connection, Corbo and Parkes [PODC 2005] proposed a bilateral version of the Network Creation Game, in which mutual consent and payment are required to create edges. It is known that the bilateral version has a significantly higher Price of Anarchy than the unilateral version. This is counter-intuitive, since cooperation should help to avoid socially bad states. We investigate this phenomenon by analyzing the Price of Anarchy of the bilateral version with respect to different solution concepts that allow for various degrees of cooperation among the agents. With this, we provide insights into what kind of cooperation is needed to ensure that socially good networks are created. We present a collection of asymptotically tight bounds on the Price of Anarchy that precisely map the impact of cooperation on the quality of tree networks, and we find that weak forms of cooperation already yield a significantly improved Price of Anarchy. Moreover, for general networks we show that enhanced cooperation yields close-to-optimal networks for a wide range of edge prices.
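
For concreteness, an agent's cost in the unilateral model is $\alpha$ times the number of edges it buys plus the sum of its hop distances to all other agents (in the bilateral version, both endpoints must consent and pay). A minimal sketch, with an illustrative star network:

```python
# Cost of a single agent in the (unilateral) Network Creation Game:
# alpha per bought edge plus the sum of hop distances to all other agents.
from collections import deque

def agent_cost(n, bought, all_edges, agent, alpha):
    """bought: edges paid for by `agent`; all_edges: every edge in the network."""
    adj = {v: set() for v in range(n)}
    for u, v in all_edges:
        adj[u].add(v)
        adj[v].add(u)
    dist = {agent: 0}
    queue = deque([agent])
    while queue:                       # BFS for hop distances
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    if len(dist) < n:                  # disconnected networks have infinite cost
        return float("inf")
    return alpha * len(bought) + sum(dist.values())

# Star on 5 vertices; the center buys all 4 edges, alpha = 2:
edges = [(0, i) for i in range(1, 5)]
print(agent_cost(5, edges, edges, 0, alpha=2))  # 2*4 + (1+1+1+1) = 12
print(agent_cost(5, [], edges, 1, alpha=2))     # a leaf buys nothing: 1+2+2+2 = 7
```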

Computing optimal, collision-free trajectories for high-dimensional systems is a challenging problem. Sampling-based planners struggle with the dimensionality, whereas trajectory optimizers may get stuck in local minima due to inherent nonconvexities in the optimization landscape. The use of mixed-integer programming to encapsulate these nonconvexities and find globally optimal trajectories has recently shown great promise, thanks in part to tight convex relaxations and efficient approximation strategies that greatly reduce runtimes. These approaches were previously limited to Euclidean configuration spaces, precluding their use with mobile bases or continuous revolute joints. In this paper, we handle such scenarios by modeling configuration spaces as Riemannian manifolds, and we describe a reduction procedure for the zero-curvature case to a mixed-integer convex optimization problem. We demonstrate our results on various robot platforms, including producing efficient collision-free trajectories for a PR2 bimanual mobile manipulator.
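
A hedged one-joint sketch of the core difficulty (not the paper's mixed-integer formulation): a continuous revolute joint lives on the flat manifold $S^1$, so even the shortest motion between two angles involves an integer choice of winding, which is exactly the kind of discrete decision a mixed-integer program can encode alongside the convex continuous part.

```python
# Minimal sketch of why continuous revolute joints force integer choices:
# on S^1, moving from angle a to angle b means choosing an integer winding
# k, after which the remaining (here trivial) subproblem is convex.
import math

def shortest_revolute_motion(a, b, max_winding=2):
    """Pick the integer winding k minimizing |(b + 2*pi*k) - a|."""
    best_k = min(range(-max_winding, max_winding + 1),
                 key=lambda k: abs((b + 2 * math.pi * k) - a))
    return (b + 2 * math.pi * best_k) - a   # signed displacement to execute

# Going from 0.1 rad to (2*pi - 0.1) rad: the best motion is -0.2 rad,
# i.e., "backwards" across the wrap-around, chosen by the integer k = -1.
print(shortest_revolute_motion(0.1, 2 * math.pi - 0.1))
```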

This paper describes a method for fast simplification of surface meshes. Whereas past methods focus on visual appearance, our goal is to solve equations on the surface. Hence, rather than approximate the extrinsic geometry, we construct a coarse intrinsic triangulation of the input domain. In the spirit of the quadric error metric (QEM), we perform greedy decimation while agglomerating global information about approximation error. In lieu of extrinsic quadrics, however, we store intrinsic tangent vectors that track how far curvature "drifts" during simplification. This process also yields a bijective map between the fine and coarse mesh, and prolongation operators for both scalar- and vector-valued data. Moreover, we obtain hard guarantees on element quality via intrinsic retriangulation - a feature unique to the intrinsic setting. The overall payoff is a "black box" approach to geometry processing, which decouples mesh resolution from the size of matrices used to solve equations. We show how our method benefits several fundamental tasks, including geometric multigrid, all-pairs geodesic distance, mean curvature flow, geodesic Voronoi diagrams, and the discrete exponential map.
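
As a hedged 1D analogue of the decimation strategy (the paper works intrinsically on surface meshes and tracks curvature drift rather than scalar errors; a real implementation would use a priority queue instead of a linear rescan), the sketch below greedily removes polyline vertices while agglomerating approximation error:

```python
# 1D analogue of greedy decimation: repeatedly remove the interior vertex
# of a polyline whose removal adds the least error, letting its neighbors
# inherit that error (in the spirit of QEM's agglomerated quadrics).
import math

def decimate(points, target):
    alive = list(range(len(points)))
    inherited = [0.0] * len(points)        # error agglomerated at each vertex

    def removal_cost(pos):
        i, j, k = alive[pos - 1], alive[pos], alive[pos + 1]
        (ax, ay), (bx, by), (cx, cy) = points[i], points[j], points[k]
        # distance from b to line ac, plus the error b already carries
        area2 = abs((cx - ax) * (by - ay) - (cy - ay) * (bx - ax))
        return area2 / math.hypot(cx - ax, cy - ay) + inherited[j]

    while len(alive) > target:
        pos = min(range(1, len(alive) - 1), key=removal_cost)
        cost = removal_cost(pos)
        alive.pop(pos)
        inherited[alive[pos - 1]] += cost / 2   # neighbors agglomerate the
        inherited[alive[pos]] += cost / 2       # removed vertex's error

    return [points[i] for i in alive]

pts = [(0, 0), (1, 0.05), (2, -0.02), (3, 1.5), (4, 0.1), (5, 0)]
print(decimate(pts, 4))   # keeps the endpoints and the salient spike at x=3
```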

This paper studies omnibus tests of goodness of fit in the context of reweighted Anderson-Darling tests and makes three contributions. The first is to provide a geometric understanding: it is argued that the test statistic with minimum variance for exchangeable distributional deviations can serve as a good general-purpose test. The second is to propose better omnibus tests, which are circularly symmetric and obtained by circularizing reweighted Anderson-Darling test statistics or, more generally, test statistics based on the observed order statistics; the resulting tests are called circularized tests. A limited but arguably convincing simulation study of finite-sample performance shows that circularized tests typically outperform their parent methods. The third is to establish new large-sample results.
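
For reference, the classical Anderson-Darling statistic for uniformity on [0,1] is the base ingredient that reweighting and circularization modify (the paper's circularized construction itself is not reproduced here):

```python
# Classical Anderson-Darling statistic A^2 for testing uniformity on [0,1]:
# A^2 = -n - (1/n) * sum_{i=1}^n (2i-1) [ln U_(i) + ln(1 - U_(n+1-i))].
import math
import random

def anderson_darling(sample):
    u = sorted(sample)
    n = len(u)
    s = sum((2 * i - 1) * (math.log(u[i - 1]) + math.log(1 - u[n - i]))
            for i in range(1, n + 1))
    return -n - s / n

random.seed(0)
uniform = [random.random() for _ in range(500)]
skewed = [random.random() ** 3 for _ in range(500)]     # far from uniform
print(round(anderson_darling(uniform), 3))  # moderate, near its null mean of 1
print(round(anderson_darling(skewed), 3))   # large => reject uniformity
```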

We study the complexity of reductions for weighted reachability in parametric Markov decision processes. That is, we say a state p is never worse than q if, for all valuations of the polynomial indeterminates, the maximal expected weight that can be reached from p is at least the corresponding value from q. In terms of computational complexity, we establish that determining whether p is never worse than q is coETR-complete. On the positive side, we give a polynomial-time algorithm to compute the equivalence classes of the order we study for Markov chains. Additionally, we describe and implement two inference rules to under-approximate the never-worse relation and empirically show that they can be used as an efficient preprocessing step for the analysis of large Markov decision processes.
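
Since deciding the relation exactly is coETR-complete, a cheap complement is the contrapositive search: sample valuations, solve each instantiated MDP by value iteration, and report any valuation witnessing val(p) < val(q). The toy parametric MDP below is hypothetical and only illustrates this refutation step, not the paper's inference rules:

```python
# Refuting "p is never worse than q" by sampling valuations: one valuation
# with val(p) < val(q) is a witness. (Proving the relation for ALL
# valuations is the coETR-hard direction; this is only the cheap search.)
import random

def value(mdp, terminals, start, n_iter=200):
    """Value iteration for the maximal expected terminal weight."""
    v = {s: terminals.get(s, 0.0) for s in mdp}
    for _ in range(n_iter):
        for s, actions in mdp.items():
            if s not in terminals:
                v[s] = max(sum(pr * v[t] for t, pr in dist) for dist in actions)
    return v[start]

def sample_counterexample(p, q, build_mdp, trials=1000):
    for _ in range(trials):
        x = random.random()                  # one indeterminate, valued in (0,1)
        mdp, terminals = build_mdp(x)
        if value(mdp, terminals, p) < value(mdp, terminals, q):
            return x                         # witness: p is worse than q here
    return None                              # no refutation found

def build_mdp(x):
    # p reaches "hi" with probability x; q reaches "hi" with probability 0.5.
    mdp = {"p": [[("hi", x), ("lo", 1 - x)]],
           "q": [[("hi", 0.5), ("lo", 0.5)]],
           "hi": [], "lo": []}
    return mdp, {"hi": 10.0, "lo": 0.0}

print(sample_counterexample("p", "q", build_mdp))  # some x < 0.5 refutes the relation
```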

The page number of a directed acyclic graph $G$ is the minimum $k$ for which there is a topological ordering of $G$ and a $k$-coloring of the edges such that no two edges of the same color cross, i.e., have alternating endpoints along the topological ordering. We address the long-standing open problem asking for the largest page number among all upward planar graphs. We improve the best known lower bound to $5$ and present the first asymptotic improvement over the trivial $O(n)$ upper bound, where $n$ denotes the number of vertices in $G$. Specifically, we first prove that the page number of every upward planar graph is bounded in terms of its width, as well as its height. We then combine both approaches to show that every $n$-vertex upward planar graph has page number $O(n^{2/3} \log(n)^{2/3})$.
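
The crossing condition is easy to check directly: two same-colored edges cross exactly when their endpoints alternate along the ordering, i.e., $a < c < b < d$. A small validity checker for a page assignment (a sketch under our own naming):

```python
# Two edges on the same page cross iff their endpoints alternate along the
# topological ordering. A page assignment is a valid book embedding iff no
# same-page pair of edges alternates.
from itertools import combinations

def crosses(e, f):
    (a, b), (c, d) = sorted(e), sorted(f)
    if (a, b) > (c, d):
        (a, b), (c, d) = (c, d), (a, b)
    return a < c < b < d               # alternating endpoints

def valid_book_embedding(edges, page_of):
    return not any(page_of[e] == page_of[f] and crosses(e, f)
                   for e, f in combinations(edges, 2))

# K4 on the ordering 0,1,2,3 needs 2 pages: (0,2) and (1,3) alternate.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
pages = {e: 0 for e in edges}
pages[(1, 3)] = 1
print(valid_book_embedding(edges, pages))   # True with 2 pages
pages[(1, 3)] = 0
print(valid_book_embedding(edges, pages))   # False: (0,2) and (1,3) cross
```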

A random algebraic graph is defined by a group $G$ with a uniform distribution over it and a connection $\sigma:G\longrightarrow[0,1]$ with expectation $p,$ satisfying $\sigma(g)=\sigma(g^{-1}).$ The random graph $\mathsf{RAG}(n,G,p,\sigma)$ with vertex set $[n]$ is formed as follows. First, $n$ independent vectors $x_1,\ldots,x_n$ are sampled uniformly from $G.$ Then, vertices $i,j$ are connected with probability $\sigma(x_ix_j^{-1}).$ This model captures random geometric graphs over the sphere and the hypercube, certain regimes of the stochastic block model, and random subgraphs of Cayley graphs. The main question of interest to the current paper is: when is a random algebraic graph statistically and/or computationally distinguishable from $\mathsf{G}(n,p)$? Our results fall into two categories. 1) Geometric. We focus on the case $G =\{\pm1\}^d$ and use Fourier-analytic tools. For hard threshold connections, we match [LMSY22b] for $p = \omega(1/n)$ and for $1/(r\sqrt{d})$-Lipschitz connections we extend the results of [LR21b] when $d = \Omega(n\log n)$ to the non-monotone setting. We study other connections such as indicators of interval unions and low-degree polynomials. 2) Algebraic. We provide evidence for an exponential statistical-computational gap. Consider any finite group $G$ and let $A\subseteq G$ be a set of elements formed by including each set of the form $\{g, g^{-1}\}$ independently with probability $1/2.$ Let $\Gamma_n(G,A)$ be the distribution of random graphs formed by taking a uniformly random induced subgraph of size $n$ of the Cayley graph $\Gamma(G,A).$ Then, $\Gamma_n(G,A)$ and $\mathsf{G}(n,1/2)$ are statistically indistinguishable with high probability over $A$ if and only if $\log|G|\gtrsim n.$ However, low-degree polynomial tests fail to distinguish $\Gamma_n(G,A)$ and $\mathsf{G}(n,1/2)$ with high probability over $A$ when $\log |G|=\log^{\Omega(1)}n.$
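
For intuition, here is how $\mathsf{RAG}(n,G,p,\sigma)$ is sampled in the geometric case $G=\{\pm1\}^d$ with a hard-threshold connection (the threshold below is illustrative and not calibrated to a target expectation $p$):

```python
# Sampling RAG(n, {+-1}^d, p, sigma) with a hard-threshold connection. In
# the group {+-1}^d with coordinatewise multiplication, every element is
# its own inverse, so x_i * x_j^{-1} = x_i * x_j and sigma(g) = sigma(g^{-1})
# holds automatically for any sigma of the coordinate sum.
import random

def sample_rag(n, d, tau, seed=0):
    rng = random.Random(seed)
    xs = [[rng.choice((-1, 1)) for _ in range(d)] for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # hard threshold: connect iff <x_i, x_j> >= tau
            if sum(a * b for a, b in zip(xs[i], xs[j])) >= tau:
                edges.add((i, j))
    return edges

g = sample_rag(n=50, d=100, tau=10)
print(len(g), "edges")   # edge density is controlled by the threshold tau
```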

Several sports tournaments contain a round-robin group stage where the teams are assigned to groups subject to some constraints. Since finding an allocation of the teams that satisfies the established criteria is non-trivial, the organisers usually use a computer-assisted random draw to avoid any dead end, a situation in which the teams still to be drawn cannot be assigned to the remaining empty slots. However, this procedure is known to be unfair: the feasible allocations are not equally likely. Therefore, we quantify the departure of the 2018 FIFA World Cup draw procedure from an evenly distributed random choice among all valid allocations and evaluate its effect on the probability of qualification for the knockout stage for each nation. The official draw order of Pot 1, Pot 2, Pot 3, Pot 4 turns out to be a significantly better option than the 23 other draw orders with respect to the unwanted distortions. Governing bodies in football are encouraged to make similar calculations immediately before the draw of major sporting events in order to avoid using a highly unfair draw order that can easily be improved by a simple relabelling of the pots.
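
The mechanism's bias is easy to reproduce on a toy instance (three teams, three slots, each team barred from its own slot, so the valid outcomes are the two derangements; this is not the World Cup constraint system). A sequential draw with dead-end avoidance picks one derangement about twice as often as the other, even though a uniform draw would give each probability 1/2:

```python
# Toy demonstration that a sequential draw with dead-end avoidance is not
# uniform over valid outcomes, the distortion the paper quantifies for the
# 2018 World Cup. Team i is forbidden from slot i; each drawn team takes
# the first slot that still permits a valid completion.
import itertools
import random
from collections import Counter

FORBIDDEN = {0: {0}, 1: {1}, 2: {2}}

def completable(assigned, remaining_teams):
    free = [s for s in range(3) if s not in assigned.values()]
    return any(all(s not in FORBIDDEN[t] for t, s in zip(remaining_teams, p))
               for p in itertools.permutations(free))

def sequential_draw(rng):
    order = rng.sample(range(3), 3)            # random draw order of the teams
    assigned = {}
    for idx, team in enumerate(order):
        for slot in range(3):                  # first feasible slot wins
            if slot in assigned.values() or slot in FORBIDDEN[team]:
                continue
            trial = dict(assigned)
            trial[team] = slot
            if completable(trial, order[idx + 1:]):
                assigned[team] = slot
                break
    return tuple(assigned[t] for t in range(3))

rng = random.Random(1)
freq = Counter(sequential_draw(rng) for _ in range(30000))
print(freq)   # ~2/3 vs ~1/3, although both derangements are equally valid
```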

Point cloud-based large-scale place recognition is fundamental for many applications like Simultaneous Localization and Mapping (SLAM). Although many models have been proposed and achieve good performance by learning short-range local features, long-range contextual properties have often been neglected. Moreover, model size has also become a bottleneck for wide deployment. To overcome these challenges, we propose a super lightweight network model termed SVT-Net for large-scale place recognition. Specifically, on top of the highly efficient 3D Sparse Convolution (SP-Conv), an Atom-based Sparse Voxel Transformer (ASVT) and a Cluster-based Sparse Voxel Transformer (CSVT) are proposed to learn both short-range local features and long-range contextual features. Consisting of ASVT and CSVT, SVT-Net achieves state-of-the-art performance on benchmark datasets in terms of both accuracy and speed with a super-light model size (0.9M). Meanwhile, two simplified versions of SVT-Net are introduced, which also achieve state-of-the-art performance and further reduce the model size to 0.8M and 0.4M, respectively.
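
As a loose, dense-tensor reading of the two attention modules (the actual SVT-Net operates on sparse voxels produced by SP-Conv; every module name and size below is our assumption, not the authors' code), the atom-level and cluster-level attention can be sketched in PyTorch:

```python
# Hedged dense sketch of SVT-Net's two attention ideas: self-attention
# across individual voxel features (atom level) and attention through a
# small set of learnable cluster tokens (cluster level).
import torch
import torch.nn as nn

class AtomAttention(nn.Module):
    """Self-attention across voxel features, mixing information globally."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (batch, n_voxels, dim)
        out, _ = self.attn(x, x, x)
        return x + out                         # residual connection

class ClusterAttention(nn.Module):
    """Learnable cluster queries pool context, then redistribute it."""
    def __init__(self, dim, n_clusters=8, heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, n_clusters, dim))
        self.pool = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.scatter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        q = self.queries.expand(x.shape[0], -1, -1)
        clusters, _ = self.pool(q, x, x)       # clusters attend to all voxels
        out, _ = self.scatter(x, clusters, clusters)
        return x + out                         # voxels read back cluster context

feats = torch.randn(2, 1024, 64)               # 2 scans, 1024 voxels, dim 64
model = nn.Sequential(AtomAttention(64), ClusterAttention(64))
desc = model(feats).mean(dim=1)                # pooled global descriptor per scan
print(desc.shape)                              # torch.Size([2, 64])
```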
