
A Stackelberg Vertex Cover game is played on an undirected graph $\mathcal{G}$ where some of the vertices are under the control of a \emph{leader}. The remaining vertices are assigned a fixed weight. The game is played in two stages. First, the leader chooses prices for the vertices under her control. Afterward, the second player, called the \emph{follower}, selects a minimum weight vertex cover in the resulting weighted graph. That is, the follower selects a subset of vertices $C^*$ of minimum total weight, with respect to the fixed weights and the prices set by the leader, such that every edge has at least one endpoint in $C^*$. Stackelberg Vertex Cover (StackVC) describes the leader's optimization problem of selecting prices in the first stage of the game so as to maximize her revenue, which is the cumulative price of all her (priceable) vertices that are contained in the follower's solution. Previous research showed that StackVC is \textsf{NP}-hard on bipartite graphs, but solvable in polynomial time in the special case of bipartite graphs where all priceable vertices belong to the same side of the bipartition. In this paper, we investigate StackVC on paths and present a dynamic program with linear time and space complexity.
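
As an illustration of the follower's stage only, a minimum weight vertex cover on a path can itself be computed by a simple linear-time dynamic program. The following minimal sketch is not the paper's algorithm (which optimizes the leader's revenue); it takes the final vertex weights, i.e., fixed weights and the leader's prices combined, and returns the optimal cover weight.

```python
# Minimum-weight vertex cover on a path v_0 - v_1 - ... - v_{n-1}.
# w[i] is the weight of v_i (fixed weight or leader-chosen price).

def min_weight_vertex_cover_on_path(w):
    n = len(w)
    if n == 0:
        return 0.0
    # in_cost  = min cover weight of prefix v_0..v_i with v_i in the cover
    # out_cost = min cover weight of prefix v_0..v_i with v_i NOT in the cover
    in_cost, out_cost = w[0], 0.0
    for i in range(1, n):
        # if v_i is excluded, edge (v_{i-1}, v_i) forces v_{i-1} into the cover
        new_out = in_cost
        # if v_i is included, v_{i-1} may be in or out
        new_in = w[i] + min(in_cost, out_cost)
        in_cost, out_cost = new_in, new_out
    return min(in_cost, out_cost)

print(min_weight_vertex_cover_on_path([3, 1, 4, 1, 5]))  # -> 2.0 (pick v_1 and v_3)
```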

Related content

A universal partial cycle (or upcycle) for $\mathcal{A}^n$ is a cyclic sequence that covers each word of length $n$ over the alphabet $\mathcal{A}$ exactly once -- like a De Bruijn cycle, except that we also allow a wildcard symbol $\mathord{\diamond}$ that can represent any letter of $\mathcal{A}$. Chen et al. in 2017 and Goeckner et al. in 2018 showed that the existence and structure of upcycles are highly constrained, unlike those of De Bruijn cycles, which exist for every alphabet size and word length. Moreover, it was not known whether any upcycles existed for $n \ge 5$. We present several examples of upcycles over both binary and non-binary alphabets for $n = 8$. We generalize two graph-theoretic representations of De Bruijn cycles to upcycles. We then introduce novel approaches to constructing new upcycles from old ones. Notably, given any upcycle for an alphabet of size $a$, we show how to construct an upcycle for an alphabet of size $ak$ for any $k \in \mathbb{N}$, so each example generates an infinite family of upcycles. We also define folds and lifts of upcycles, which relate upcycles with differing densities of $\mathord{\diamond}$ characters. In particular, we show that every upcycle lifts to a De Bruijn cycle. Our constructions rely on a different generalization of De Bruijn cycles known as perfect necklaces, and we introduce several new examples of perfect necklaces. We extend the definitions of certain pseudorandomness properties to partial words and determine which are satisfied by all upcycles, then draw a conclusion about linear feedback shift registers. Finally, we prove new nonexistence results based on the word length $n$, alphabet size, and $\mathord{\diamond}$ density.
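
As a sanity check of the definition, the covering property of an upcycle can be verified directly by sliding a window around the cyclic sequence. The sketch below is an illustrative brute-force check, assuming `'*'` stands in for the wildcard $\mathord{\diamond}$; it is only practical for small $n$.

```python
from itertools import product

def is_upcycle(seq, alphabet, n):
    """Check that every word of length n over `alphabet` is covered by
    exactly one cyclic window of `seq`, where '*' matches any letter."""
    m = len(seq)
    coverage = {w: 0 for w in product(alphabet, repeat=n)}
    for i in range(m):  # all cyclic windows of length n
        window = [seq[(i + j) % m] for j in range(n)]
        # a window with wildcards covers every word obtained by filling them in
        choices = [alphabet if c == '*' else [c] for c in window]
        for w in product(*choices):
            coverage[w] += 1
    return all(count == 1 for count in coverage.values())

# 0011 is a De Bruijn cycle for n = 2, i.e., an upcycle with no wildcards:
print(is_upcycle("0011", "01", 2))   # True
# Replacing a letter by a wildcard here breaks exact coverage:
print(is_upcycle("0*11", "01", 2))   # False
```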

We study $\mu_5(n)$, the minimum number of convex pentagons induced by $n$ points in the plane in general position. Despite a significant body of research in understanding $\mu_4(n)$, the variant concerning convex quadrilaterals, not much is known about $\mu_5(n)$. We present two explicit constructions, inspired by point placements obtained through a combination of Stochastic Local Search and a program for realizability of point sets, that provide $\mu_5(n) \leq \binom{\lfloor n/2 \rfloor}{5} + \binom{\lceil n/2 \rceil}{5}$. Furthermore, we conjecture this bound to be optimal, and provide partial evidence by leveraging a MaxSAT encoding that allows us to verify our conjecture for $n \leq 16$.
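
For intuition, the count of convex pentagons can be computed by brute force for small point sets: five points in general position are in convex position exactly when none of them lies inside a triangle spanned by three of the others. The following sketch is illustrative only (the coordinates are made up and unrelated to the paper's constructions); it counts convex pentagons and prints the conjectured bound for comparison.

```python
from itertools import combinations
from math import comb

def ccw(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_convex_position(pts):
    # 5 points in general position are in convex position iff no point lies
    # strictly inside a triangle formed by three of the others
    for p in pts:
        rest = [q for q in pts if q != p]
        for t in combinations(rest, 3):
            s = [ccw(t[0], t[1], p), ccw(t[1], t[2], p), ccw(t[2], t[0], p)]
            if all(x > 0 for x in s) or all(x < 0 for x in s):
                return False
    return True

def mu5_count(points):
    return sum(in_convex_position(list(c)) for c in combinations(points, 5))

pts = [(0, 0), (4, 1), (7, 4), (6, 9), (2, 8), (3, 4)]  # a small example, n = 6
n = len(pts)
print(mu5_count(pts), "convex pentagons; conjectured minimum:",
      comb(n // 2, 5) + comb((n + 1) // 2, 5))
```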

Consider the triplet $(E, \mathcal{P}, \pi)$, where $E$ is a finite ground set, $\mathcal{P} \subseteq 2^E$ is a collection of subsets of $E$ and $\pi : \mathcal{P} \rightarrow [0,1]$ is a requirement function. Given a vector of marginals $\rho \in [0, 1]^E$, our goal is to find a distribution for a random subset $S \subseteq E$ such that $\operatorname{Pr}[e \in S] = \rho_e$ for all $e \in E$ and $\operatorname{Pr}[P \cap S \neq \emptyset] \geq \pi_P$ for all $P \in \mathcal{P}$, or to determine that no such distribution exists. Generalizing results of Dahan, Amin, and Jaillet, we devise a generic decomposition algorithm that solves the above problem when provided with a suitable sequence of admissible support candidates (ASCs). We show how to construct such ASCs for numerous settings, including supermodular requirements, Hoffman-Schwartz-type lattice polyhedra, and abstract networks where $\pi$ fulfils a conservation law. The resulting algorithm can be carried out efficiently when $\mathcal{P}$ and $\pi$ can be accessed via appropriate oracles. For any system allowing the construction of ASCs, our results imply a simple polyhedral description of the set of marginal vectors for which the decomposition problem is feasible. Finally, we characterize balanced hypergraphs as the systems $(E, \mathcal{P})$ that allow the perfect decomposition of any marginal vector $\rho \in [0,1]^E$, i.e., where we can always find a distribution reaching the highest attainable probability $\operatorname{Pr}[P \cap S \neq \emptyset] = \min \{ \sum_{e \in P} \rho_e, 1\}$ for all $P \in \mathcal{P}$.
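
For a tiny ground set the decomposition problem can be checked directly: it is a linear feasibility problem with one variable per subset $S \subseteq E$. The sketch below is a brute-force illustration with made-up data, not the paper's decomposition algorithm (which avoids this exponential blow-up); it encodes the marginal and hitting constraints for scipy's LP solver.

```python
from itertools import chain, combinations
from scipy.optimize import linprog

E = ['a', 'b', 'c']
P = [frozenset('ab'), frozenset('bc')]               # requirement sets
rho = {'a': 0.5, 'b': 0.4, 'c': 0.6}                 # target marginals
pi = {frozenset('ab'): 0.9, frozenset('bc'): 0.9}    # hitting probabilities

subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(E, r) for r in range(len(E) + 1))]

# equality constraints: total probability 1, and Pr[e in S] = rho_e
A_eq = [[1.0] * len(subsets)] + [[float(e in S) for S in subsets] for e in E]
b_eq = [1.0] + [rho[e] for e in E]
# inequality constraints (as <=): -Pr[P ∩ S nonempty] <= -pi_P
A_ub = [[-float(bool(Q & S)) for S in subsets] for Q in P]
b_ub = [-pi[Q] for Q in P]

res = linprog(c=[0.0] * len(subsets), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("feasible:", res.success)
```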

Minimum Weight Cycle (MWC) is the problem of finding a simple cycle of minimum weight in a graph $G=(V,E)$. This is a fundamental graph problem with classical sequential algorithms that run in $\tilde{O}(n^3)$ and $\tilde{O}(mn)$ time, where $n=|V|$ and $m=|E|$. In recent years this problem has received significant attention in the context of fine-grained sequential hardness, as well as in the design of faster sequential approximation algorithms. For computing minimum weight cycle in the distributed CONGEST model, near-linear (in $n$) lower and upper bounds on round complexity are known for directed graphs (weighted and unweighted) and for undirected weighted graphs; these lower bounds also apply to any $(2-\epsilon)$-approximation algorithm. This paper focuses on round complexity bounds for approximating MWC in the CONGEST model: For coarse approximations we show that for any constant $\alpha >1$, computing an $\alpha$-approximation of MWC requires $\Omega (\frac{\sqrt n}{\log n})$ rounds on weighted undirected graphs and on directed graphs, even if unweighted. We complement these lower bounds with a sublinear $\tilde{O}(n^{2/3}+D)$-round algorithm to compute a $(2+\epsilon)$-approximation of undirected weighted MWC. We also give an $\tilde{O}(n^{4/5}+D)$-round algorithm to compute a 2-approximation of directed unweighted MWC and a $(2+\epsilon)$-approximation of directed weighted MWC. To obtain the sublinear round bounds, we design an efficient algorithm for computing $(1+\epsilon)$-approximate shortest paths from $k$ sources in directed and weighted graphs, which is of independent interest. We present an algorithm that runs in $\tilde{O}(\sqrt{nk} + D)$ rounds if $k \ge n^{1/3}$, and $\tilde{O}(\sqrt{nk} + k^{2/5}n^{2/5+o(1)}D^{2/5} + D)$ rounds if $k<n^{1/3}$, which smoothly interpolates between the best known upper bounds for SSSP when $k=1$ and APSP when $k=n$.
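
For contrast with these distributed bounds, a simple sequential baseline for undirected weighted MWC (assuming nonnegative weights) works as follows: for each edge $(u,v)$, the lightest simple cycle through it weighs $w(u,v)$ plus the shortest $u$-$v$ distance with that edge removed. A minimal unoptimized sketch, not one of the CONGEST algorithms above:

```python
import heapq

def dijkstra(adj, src, skip_edge):
    """Shortest distances from src, ignoring the undirected edge skip_edge."""
    dist = {v: float('inf') for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if (u, v) == skip_edge or (v, u) == skip_edge:
                continue
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def minimum_weight_cycle(edges):
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    best = float('inf')
    for u, v, w in edges:
        # lightest cycle through edge (u, v)
        best = min(best, w + dijkstra(adj, u, (u, v))[v])
    return best

# Triangle of total weight 3 plus a heavier detour:
print(minimum_weight_cycle([(1, 2, 1), (2, 3, 1), (1, 3, 1),
                            (3, 4, 5), (1, 4, 5)]))  # -> 3.0
```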

We analyze how symmetries can be used to compress structures (also known as interpretations) onto a smaller domain without loss of information. This analysis suggests the possibility of solving satisfiability problems in the compressed domain for better performance. Thus, we propose a novel two-step method: (i) the sentence to be satisfied is automatically translated into an equisatisfiable sentence over a ``lifted'' vocabulary that allows domain compression; (ii) satisfiability of the lifted sentence is checked by growing the (initially unknown) compressed domain until a satisfying structure is found. The key issue is to ensure that this satisfying structure can always be expanded into an uncompressed structure that satisfies the original sentence. We present an adequate translation for sentences in typed first-order logic extended with aggregates. Our experimental evaluation shows large speedups for generative configuration problems. The method also has applications in the verification of software operating on complex data structures. Further refinements of the translation are left for future work.

The problem of determining whether a graph $G$ contains another graph $H$ as a minor, referred to as the minor containment problem, is a fundamental problem in the field of graph algorithms. While it is NP-complete when $G$ and $H$ are general graphs, it is sometimes tractable on more restricted graph classes. This study focuses on the case where both $G$ and $H$ are trees, known as the tree minor containment problem. Even in this case, the problem is known to be NP-complete. In contrast, polynomial-time algorithms are known for the case when both trees are caterpillars or when the maximum degree of $H$ is a constant. Our research aims to clarify the boundary of tractability and intractability for the tree minor containment problem. Specifically, we provide dichotomies for the computational complexities of the problem based on three structural parameters: the diameter, pathwidth, and path eccentricity.

We provide an algorithm which, with high probability, maintains a $(1-\epsilon)$-approximate maximum flow on an undirected graph undergoing $m$ edge additions in amortized $m^{o(1)} \epsilon^{-3}$ time per update. To obtain this result, we provide a more general algorithm that solves what we call the incremental, thresholded $p$-norm flow problem, which asks to determine the first edge insertion in an undirected graph that causes the minimum $\ell_p$-norm flow to decrease below a given threshold in value. Since we solve this thresholded problem, our data structure succeeds against an adaptive adversary that can only see the data structure's output. Furthermore, since our algorithm holds for $p = 2$, we obtain improved algorithms for dynamically maintaining the effective resistance between a pair of vertices in an undirected graph undergoing edge insertions. Our algorithm builds upon previous dynamic algorithms for approximately solving the minimum-ratio cycle problem that underlie previous advances on the maximum flow problem [Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva, FOCS '22] as well as recent dynamic maximum flow algorithms [v.d.Brand-Liu-Sidford, STOC '23]. Instead of using interior point methods, which were a key component of these recent advances, our algorithm uses an optimization method based on $\ell_p$-norm iterative refinement and the multiplicative weight update method. This ensures a monotonicity property in the minimum-ratio cycle subproblems that allows us to apply known data structures and bypass issues arising from adaptive queries.

In adaptive data analysis, a mechanism gets $n$ i.i.d. samples from an unknown distribution $D$, and is required to provide accurate estimations to a sequence of adaptively chosen statistical queries with respect to $D$. Hardt and Ullman (FOCS 2014) and Steinke and Ullman (COLT 2015) showed that in general, it is computationally hard to answer more than $\Theta(n^2)$ adaptive queries, assuming the existence of one-way functions. However, these negative results strongly rely on an adversarial model that significantly advantages the adversarial analyst over the mechanism, as the analyst, who chooses the adaptive queries, also chooses the underlying distribution $D$. This imbalance raises questions with respect to the applicability of the obtained hardness results -- an analyst who has complete knowledge of the underlying distribution $D$ would have little need, if at all, to issue statistical queries to a mechanism which only holds a finite number of samples from $D$. We consider more restricted adversaries, called \emph{balanced}, where each such adversary consists of two separated algorithms: the \emph{sampler}, who is the entity that chooses the distribution and provides the samples to the mechanism, and the \emph{analyst}, who chooses the adaptive queries but has no prior knowledge of the underlying distribution (and hence has no a priori advantage over the mechanism). We improve the quality of previous lower bounds by revisiting them using an efficient \emph{balanced} adversary, under standard public-key cryptography assumptions. We show that these stronger hardness assumptions are unavoidable in the sense that any computationally bounded \emph{balanced} adversary that has the structure of all known attacks implies the existence of public-key cryptography.

The iterative rational Krylov algorithm (IRKA) is a commonly used fixed-point iteration developed to minimize the $\mathcal{H}_2$ model order reduction error. In this work, IRKA is recast as a Riemannian gradient descent method with a fixed step size over the manifold of rational functions having fixed degree. This interpretation motivates the development of a Riemannian gradient descent method that, as a natural extension, utilizes a variable step size and line search. Comparisons between IRKA and this extension on a few examples demonstrate significant benefits.
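
To illustrate the step-size point in a familiar setting, the sketch below contrasts a fixed-step gradient iteration with one using backtracking (Armijo) line search on a Euclidean toy problem. This is only an analogy for the extension described above, not the Riemannian method on the manifold of rational functions.

```python
import numpy as np

def grad_descent(f, grad, x0, steps=50, fixed_step=None):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if fixed_step is not None:
            t = fixed_step
        else:
            # backtracking line search: halve t until sufficient decrease holds
            t = 1.0
            while f(x - t * g) > f(x) - 0.5 * t * np.dot(g, g):
                t *= 0.5
        x = x - t * g
    return x

A = np.diag([1.0, 100.0])            # ill-conditioned quadratic f(x) = x^T A x / 2
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x0 = [1.0, 1.0]
print(f(grad_descent(f, grad, x0, fixed_step=0.005)))  # slow with a safe fixed step
print(f(grad_descent(f, grad, x0)))                    # faster with line search
```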

It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
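
A minimal PyTorch sketch of the Outlier Exposure objective (the weight $\lambda = 0.5$ here is an illustrative choice; consult the paper for the exact setup): the usual cross-entropy on in-distribution data plus a term pushing the model's predictions on auxiliary outliers toward the uniform distribution.

```python
import torch
import torch.nn.functional as F

def oe_loss(logits_in, labels_in, logits_out, lambda_oe=0.5):
    # standard cross-entropy on in-distribution examples
    loss_in = F.cross_entropy(logits_in, labels_in)
    # cross-entropy between predictions on outliers and the uniform
    # distribution, i.e., the negative mean log-softmax over all classes
    loss_out = -F.log_softmax(logits_out, dim=1).mean(dim=1).mean()
    return loss_in + lambda_oe * loss_out

# toy usage with random "network outputs" for a 10-class problem
logits_in = torch.randn(8, 10)
labels_in = torch.randint(0, 10, (8,))
logits_out = torch.randn(8, 10)   # batch from the auxiliary outlier dataset
print(oe_loss(logits_in, labels_in, logits_out))
```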
