
Let $E=\{e_1,\ldots,e_n\}$ be a set of $C$-oriented disjoint segments in the plane, where $C$ is a given finite set of orientations that spans the plane, and let $s$ and $t$ be two points. (We also require that for each orientation in $C$, its opposite orientation is also in $C$.) We seek a minimum-link $C$-oriented tour of $E$, that is, a polygonal path $\pi$ from $s$ to $t$ that visits the segments of $E$ in order, such that the orientations of its edges are in $C$ and their number is minimum. We present an algorithm for computing such a tour in $O(|C|^2 \cdot n^2)$ time. This problem already captures most of the difficulties that arise in the study of the more general problem, in which $E$ is a set of not-necessarily-disjoint $C$-oriented polygons.
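
To make the feasibility condition concrete, here is a minimal Python sketch (not the paper's algorithm) that checks whether a candidate polygonal path is $C$-oriented and reports its link count; representing $C$ as a list of angles is an assumption made for illustration.

```python
import math

def is_c_oriented(path, orientations, tol=1e-9):
    """Check that every edge of the polygonal path `path` (a list of (x, y)
    vertices) uses one of the allowed orientations. `orientations` is a list
    of angles in radians (a hypothetical input format for C)."""
    # Since C contains each orientation together with its opposite,
    # orientations can be compared modulo pi.
    allowed = [a % math.pi for a in orientations]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        ang = math.atan2(y1 - y0, x1 - x0) % math.pi
        if not any(min(abs(ang - a), math.pi - abs(ang - a)) < tol for a in allowed):
            return False
    return True

def link_count(path):
    """The cost of a tour in the link metric: its number of edges."""
    return len(path) - 1
```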


We consider the problem of predictive monitoring (PM), i.e., predicting at runtime the satisfaction of a desired property from the system's current state. Due to its relevance for runtime safety assurance and online control, PM methods need to be efficient to enable timely interventions against predicted violations, while providing correctness guarantees. We introduce \textit{quantitative predictive monitoring (QPM)}, the first PM method to support stochastic processes and rich specifications given in Signal Temporal Logic (STL). Unlike most existing PM techniques, which predict whether or not some property $\phi$ is satisfied, QPM provides a quantitative measure of satisfaction by predicting the quantitative (aka robust) STL semantics of $\phi$. QPM derives prediction intervals that are highly efficient to compute and come with probabilistic guarantees, in that the intervals cover with arbitrary probability the STL robustness values relative to the stochastic evolution of the system. To do so, we take a machine-learning approach and leverage recent advances in conformal inference for quantile regression, thereby avoiding expensive Monte Carlo simulations at runtime to estimate the intervals. We also show how our monitors can be combined in a compositional manner to handle composite formulas, without retraining the predictors or sacrificing the guarantees. We demonstrate the effectiveness and scalability of QPM over a benchmark of four discrete-time stochastic processes with varying degrees of complexity.
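
The conformalized-quantile-regression recipe the abstract alludes to can be sketched generically as follows. This is a minimal split-conformal sketch, not QPM's actual monitor; the scikit-learn models and variable names are assumptions (features `X` would encode the system state, targets `y` the STL robustness values).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def conformal_quantile_intervals(X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    """Split-conformal prediction intervals on top of quantile regression."""
    lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_train, y_train)
    hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_train, y_train)
    # Conformity score: how far each calibration point falls outside the predicted band.
    scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
    n = len(y_cal)
    # Finite-sample corrected quantile level; the widened intervals then cover
    # with probability >= 1 - alpha over calibration and test draws.
    q = np.quantile(scores, min(1.0, np.ceil((1 - alpha) * (n + 1)) / n))
    return lo.predict(X_test) - q, hi.predict(X_test) + q
```

Because the interval adjustment is a single quantile of precomputed scores, the runtime cost per query is just two model evaluations, which is what makes this approach suitable for online monitoring.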

In the average-case $k$-SUM problem, given $r$ integers chosen uniformly at random from $\{0,\ldots,M-1\}$, the objective is to find a set of $k$ numbers that sum to $0$ modulo $M$ (this set is called a solution). In the related $k$-XOR problem, given $r$ uniformly random Boolean vectors of length $\log{M}$, the objective is to find a set of $k$ of them whose bitwise XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis. The feasibility and complexity of these problems depend on the relative values of $k$, $r$, and $M$. The dense regime of $M \leq r^k$, where solutions exist with high probability, is quite well understood, and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of $M\gg r^k$, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings. We study the planted $k$-SUM and $k$-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As $M$ increases past $r^k$, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems, including conditional lower bounds for $r^k \leq M \leq r^{2k}$, a search-to-decision reduction for $M > r^k$, hardness amplification for $M \geq r^k$, a construction of public-key encryption (PKE) for some $M \leq 2^{\mathrm{polylog}(r)}$, and non-trivial algorithms for any $M \geq 2^{r^2}$.
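
For concreteness, one natural way to generate a planted $k$-SUM instance is to sample $r$ uniform values and re-randomize a random $k$-subset so that it sums to $0$ modulo $M$; this planting convention is our assumption for illustration, and the paper's exact distribution may differ in details.

```python
import random

def planted_ksum_instance(r, k, M, rng=random):
    """Sample r integers uniform in {0, ..., M-1} and plant a solution: a
    random k-subset is re-sampled so it sums to 0 mod M (first k-1 entries
    free, last entry determined), which makes the planted tuple uniform
    over k-tuples summing to 0 mod M."""
    xs = [rng.randrange(M) for _ in range(r)]
    planted = rng.sample(range(r), k)
    for i in planted[:-1]:
        xs[i] = rng.randrange(M)
    xs[planted[-1]] = -sum(xs[i] for i in planted[:-1]) % M
    return xs, sorted(planted)

# Example in the sparse regime M >> r^k:
xs, sol = planted_ksum_instance(r=100, k=3, M=10**9)
assert sum(xs[i] for i in sol) % 10**9 == 0
```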

Spatially correlated data with an excess of zeros, usually referred to as zero-inflated spatial data, arise in many disciplines. Examples include count data, for instance, abundance (or lack thereof) of animal species and disease counts, as well as semi-continuous data like observed precipitation. Spatial two-part models are a flexible class of models for such data. Fitting two-part models can be computationally expensive for large data due to high-dimensional dependent latent variables, costly matrix operations, and slow mixing Markov chains. We describe a flexible, computationally efficient approach for modeling large zero-inflated spatial data using the projection-based intrinsic conditional autoregression (PICAR) framework. We study our approach, which we call PICAR-Z, through extensive simulation studies and two environmental data sets. Our results suggest that PICAR-Z provides accurate predictions while remaining computationally efficient. An important goal of our work is to allow researchers who are not experts in computation to easily build computationally efficient extensions to zero-inflated spatial models; this also allows for a more thorough exploration of modeling choices in two-part models than was previously possible. We show that PICAR-Z is easy to implement and extend in popular probabilistic programming languages such as nimble and stan.
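
The two-part structure can be sketched generatively: a binary occurrence process produces the exact zeros, and a positive amount process fills in the nonzero values. In PICAR the latent spatial fields would be replaced by low-rank basis expansions; the sketch below simply takes the latent vectors as given inputs, and all names are illustrative.

```python
import numpy as np

def simulate_two_part(X, beta_occ, beta_amt, latent_occ, latent_amt, rng):
    """Generative sketch of a spatial two-part (hurdle) model: a logistic
    occurrence process decides zero vs. nonzero, and a log-normal amount
    process fills in the nonzero values. latent_occ / latent_amt stand in
    for the spatial random effects at each site."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta_occ + latent_occ)))  # P(nonzero) per site
    occ = rng.binomial(1, p)                                 # binary occurrence
    amount = rng.lognormal(mean=X @ beta_amt + latent_amt, sigma=0.5)
    return occ * amount                                      # exact zeros where occ == 0
```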

It is known that, for every $k\geq 2$, $C_{2k}$-freeness can be decided by a generic Monte-Carlo algorithm running in $n^{1-1/\Theta(k^2)}$ rounds in the CONGEST model. For $2\leq k\leq 5$, faster Monte-Carlo algorithms do exist, running in $O(n^{1-1/k})$ rounds, based on upper bounding the number of messages to be forwarded, and aborting search sub-routines for which this number exceeds certain thresholds. We investigate the possible extension of these threshold-based algorithms to the detection of larger cycles. We first show that, for every $k\geq 6$, there exists an infinite family of graphs containing a $2k$-cycle for which any threshold-based algorithm fails to detect that cycle. Hence, in particular, neither $C_{12}$-freeness nor $C_{14}$-freeness can be decided by threshold-based algorithms. Nevertheless, we show that $\{C_{12},C_{14}\}$-freeness can still be decided by a threshold-based algorithm, running in $O(n^{1-1/7})= O(n^{0.857\dots})$ rounds, which is faster than the generic algorithm, which would run in $O(n^{1-1/22})\simeq O(n^{0.954\dots})$ rounds. Moreover, we exhibit an infinite collection of families $\mathcal{F}$ of cycles such that threshold-based algorithms can decide $\mathcal{F}$-freeness for every family $\mathcal{F}$ in this collection.
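
As a toy, sequential illustration of the threshold idea (emphatically not a CONGEST algorithm), one can grow simple paths from each start vertex, test whether a path on $2k$ vertices closes into a cycle, and abort a start vertex's search once the number of generated partial paths ("messages") exceeds the threshold; the interface below is hypothetical.

```python
from collections import deque

def c2k_search_with_threshold(adj, k, threshold):
    """Brute-force 2k-cycle search with a message budget. `adj` is a dict
    mapping each vertex to a set of neighbors. A simple path on 2k vertices
    whose endpoint is adjacent to its start closes a 2k-cycle.
    Returns True, False, or "aborted"."""
    aborted = False
    for s in adj:
        frontier, msgs = deque([(s,)]), 0
        while frontier and msgs <= threshold:
            path = frontier.popleft()
            if len(path) == 2 * k:
                if s in adj[path[-1]]:
                    return True          # found a 2k-cycle through s
                continue
            for w in adj[path[-1]]:
                if w not in path:
                    frontier.append(path + (w,))
                    msgs += 1            # one more forwarded "message"
        if msgs > threshold:
            aborted = True               # this sub-search gave up
    return "aborted" if aborted else False
```

The lower bounds in the abstract exploit exactly this failure mode: graphs where every sub-search must exceed its budget before the cycle is witnessed.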

The problem of fairly allocating a set of indivisible items is a well-known challenge in the field of (computational) social choice. In this scenario, there is a fundamental incompatibility between notions of fairness (such as envy-freeness and proportionality) and economic efficiency (such as Pareto-optimality). However, in the real world, items are not always allocated once and for all, but often repeatedly. For example, the items may be recurring chores to distribute in a household. Motivated by this, we initiate the study of the repeated fair division of indivisible goods and chores and propose a formal model for this scenario. In this paper, we show that, if the number of repetitions is a multiple of the number of agents, we can always find (i) a sequence of allocations that is envy-free and complete (in polynomial time), and (ii) a sequence of allocations that is proportional and Pareto-optimal (in exponential time). On the other hand, we show that irrespective of the number of repetitions, an envy-free and Pareto-optimal sequence of allocations may not exist. For the case of two agents, we show that if the number of repetitions is even, it is always possible to find a sequence of allocations that is overall envy-free and Pareto-optimal. We then prove even stronger fairness guarantees, showing that every allocation in such a sequence satisfies some relaxation of envy-freeness.
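
Under additive utilities (an assumption for illustration; the interface is hypothetical), overall envy-freeness of a sequence of allocations is straightforward to check by aggregating each agent's utility for every other agent's bundles across rounds:

```python
import numpy as np

def overall_envy_free(utilities, allocation_seq):
    """utilities[i][g]: agent i's additive utility for item g (may be
    negative for chores). allocation_seq: list of rounds, each a dict
    mapping agent index -> iterable of items received that round."""
    n = len(utilities)
    total = np.zeros((n, n))  # total[i][j] = i's utility for j's items, summed over rounds
    for round_alloc in allocation_seq:
        for j, items in round_alloc.items():
            for i in range(n):
                total[i][j] += sum(utilities[i][g] for g in items)
    # Envy-free overall: no agent prefers another agent's aggregate bundle.
    return all(total[i][i] >= total[i][j] for i in range(n) for j in range(n))
```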

We consider a generalized poset sorting problem (GPS), in which we are given a query graph $G = (V, E)$ and an unknown poset $\mathcal{P}(V, \prec)$ defined on the same vertex set $V$, and the goal is to make as few queries as possible to edges in $G$ in order to fully recover $\mathcal{P}$, where each query $(u, v)$ returns the relation between $u$ and $v$, i.e., $u \prec v$, $v \prec u$, or $u \not\sim v$. This generalizes both the poset sorting problem [Faigle et al., SICOMP 88] and the generalized sorting problem [Huang et al., FOCS 11]. We give algorithms with $\tilde{O}(n\cdot \mathrm{poly}(k))$ query complexity when $G$ is a complete bipartite graph or $G$ is stochastic under the Erdős–Rényi model, where $k$ is the \emph{width} of the poset; these generalize [Daskalakis et al., SICOMP 11], which only studies the case of a complete graph $G$. Both results are based on a unified framework that reduces poset sorting to partitioning the vertices with respect to a given pivot element, which may be of independent interest. Our study of GPS also leads to a new $\tilde{O}(n^{1 - 1 / (2W)})$ competitive ratio for the so-called weighted generalized sorting problem, where $W$ is the number of distinct weights in the query graph. This problem was posed as an open question in [Charikar et al., JCSS 02], and our result makes important progress as it yields the first nontrivial $o(n)$ ratio for general weighted query graphs (and a better ratio if $W$ is bounded). We obtain this via an $\tilde{O}(nk + n^{1.5})$ query complexity algorithm for the case where every edge in $G$ is guaranteed to be comparable in the poset, which generalizes the state-of-the-art $\tilde{O}(n^{1.5})$ bound for generalized sorting [Huang et al., FOCS 11].
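
The pivot subroutine that the framework reduces to can be sketched naively as follows; the real difficulty, which this sketch ignores, is that queries are only allowed on edges of the query graph $G$. The oracle interface `query(u, v)` is hypothetical.

```python
def partition_by_pivot(vertices, pivot, query):
    """Partition vertices by their relation to a pivot element.
    `query(u, v)` returns "<" (u precedes v), ">" (v precedes u),
    or "incomparable" -- an assumed oracle interface."""
    below, above, incomparable = [], [], []
    for v in vertices:
        if v == pivot:
            continue
        rel = query(v, pivot)
        if rel == "<":
            below.append(v)
        elif rel == ">":
            above.append(v)
        else:
            incomparable.append(v)
    return below, above, incomparable
```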

In this work, for a given oriented graph $D$, we study its interval and hull numbers, denoted by ${in}(D)$ and ${hn}(D)$, respectively, in the geodetic, ${P_3}$ and ${P_3^*}$ convexities. We believe the last of these to be formally defined and studied for the first time in this paper, although its undirected version is well known in the literature. Concerning bounds, for a strongly oriented graph $D$, we prove that ${hn_g}(D)\leq m(D)-n(D)+2$ and that there is a strongly oriented graph such that ${hn_g}(D) = m(D)-n(D)$. We also determine exact values for the hull numbers in these three convexities for tournaments, which imply polynomial-time algorithms to compute them. These results allow us to deduce polynomial-time algorithms to compute ${hn_{P_3}}(D)$ when the underlying graph of $D$ is split or cobipartite. Moreover, we provide a meta-theorem by proving that if deciding whether ${in_g}(D)\leq k$ or ${hn_g}(D)\leq k$ is NP-hard or W[i]-hard parameterized by $k$, for some $i\in\mathbb{Z}_+^*$, then the same holds even if the underlying graph of $D$ is bipartite. Next, we prove that deciding whether ${hn_{P_3}}(D)\leq k$ or ${hn_{P_3^*}}(D)\leq k$ is W[2]-hard parameterized by $k$, even if the underlying graph of $D$ is bipartite; that deciding whether ${in_{P_3}}(D)\leq k$ or ${in_{P_3^*}}(D)\leq k$ is NP-complete, even if $D$ has no directed cycles and the underlying graph of $D$ is a chordal bipartite graph; and that deciding whether ${in_{P_3}}(D)\leq k$ or ${in_{P_3^*}}(D)\leq k$ is W[2]-hard parameterized by $k$, even if the underlying graph of $D$ is split. We also argue that the interval and hull numbers in the oriented $P_3$ and $P_3^*$ convexities can be computed in polynomial time for graphs of bounded tree-width by using Courcelle's theorem.
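
As an illustration of hull computation, here is a fixed-point sketch under one common convention for an oriented $P_3$-style convexity, where a vertex enters the hull when it is the middle vertex of a directed path $u \to v \to w$ with $u, w$ already inside; definitions of $P_3$ and $P_3^*$ vary across the literature, so this is not necessarily the paper's exact convention.

```python
def p3_hull(adj_out, adj_in, S):
    """Smallest superset of S closed under the rule: add v whenever some
    u -> v -> w exists with u and w already in the set. `adj_out` / `adj_in`
    map each vertex to its sets of out- / in-neighbors."""
    hull = set(S)
    changed = True
    while changed:
        changed = False
        for v in adj_out:
            if v not in hull and (adj_in[v] & hull) and (adj_out[v] & hull):
                hull.add(v)
                changed = True
    return hull
```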

If $X$ is a subset of vertices of a graph $G$, then vertices $u$ and $v$ are $X$-visible if there exists a shortest $u,v$-path $P$ such that $V(P)\cap X \subseteq \{u,v\}$. If every two vertices from $X$ are $X$-visible, then $X$ is a mutual-visibility set. The mutual-visibility number of $G$ is the cardinality of a largest mutual-visibility set of $G$ and has already been investigated. In this paper a variety of mutual-visibility problems is introduced, based on which pairs of vertices are required to be $X$-visible. This yields the total, the dual, and the outer mutual-visibility numbers. We first show that these graph invariants are related to each other and to the classical mutual-visibility number, and then we prove that the three newly introduced mutual-visibility problems are computationally difficult. In light of this result, we compute or bound their values for several graph classes, including grid graphs and tori. We conclude the study by comparing the values of these parameters, based on our computations for some specific families.
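
Checking whether a given set $X$ is a mutual-visibility set follows directly from the definition: $u$ and $v$ are $X$-visible iff the $u,v$-distance is unchanged after deleting $X \setminus \{u,v\}$. A minimal BFS-based sketch, assuming a connected graph given as an adjacency dict:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src, forbidden=frozenset()):
    """BFS distances from src in the graph with `forbidden` vertices deleted."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist and w not in forbidden:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_mutual_visibility_set(adj, X):
    """Every pair in X must have the same distance with and without X \ {u, v}."""
    X = set(X)
    for u, v in combinations(X, 2):
        if bfs_dist(adj, u, frozenset(X - {u, v})).get(v) != bfs_dist(adj, u).get(v):
            return False
    return True
```

The hardness results in the abstract concern finding a *largest* such set; verification, as above, is easy.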

We study the problem of estimating density in a wide range of metric spaces, including the Euclidean space, the sphere, the ball, and various Riemannian manifolds. Our framework involves a metric space with a doubling measure and a self-adjoint operator whose heat kernel exhibits Gaussian behaviour. We begin by reviewing the construction of kernel density estimators and the related background. As a novel result, we present pointwise kernel density estimation for probability density functions that belong to general H\"{o}lder spaces. The study is accompanied by an application in seismology. Specifically, we analyze a globally indexed dataset of earthquake occurrences and compare the out-of-sample performance of several approximated kernel density estimators indexed on the sphere.
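
On the sphere, a simple illustrative estimator (not the paper's heat-kernel construction) applies a Gaussian of the geodesic distance, with the flat-space normalizing constant, which is accurate for small bandwidths:

```python
import numpy as np

def sphere_kde(data, query, h):
    """Kernel density estimate on the unit sphere S^2.
    data: (n, 3) unit vectors (samples); query: (m, 3) unit vectors;
    h: bandwidth in radians."""
    cos = np.clip(query @ data.T, -1.0, 1.0)  # pairwise cosines
    geo = np.arccos(cos)                       # geodesic distances
    K = np.exp(-0.5 * (geo / h) ** 2)          # Gaussian of geodesic distance
    # Flat-space normalization 2*pi*h^2: a good approximation for small h,
    # since the sphere is locally Euclidean.
    return K.mean(axis=1) / (2 * np.pi * h ** 2)
```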

We improve upon previous oblivious sketching and turnstile streaming results for $\ell_1$ and logistic regression, giving a much smaller sketching dimension achieving $O(1)$-approximation and yielding an efficient optimization problem in the sketch space. Namely, we achieve for any constant $c>0$ a sketching dimension of $\tilde{O}(d^{1+c})$ for $\ell_1$ regression and $\tilde{O}(\mu d^{1+c})$ for logistic regression, where $\mu$ is a standard measure that captures the complexity of compressing the data. For $\ell_1$-regression our sketching dimension is near-linear and improves previous work which either required $\Omega(\log d)$-approximation with this sketching dimension, or required a larger $\operatorname{poly}(d)$ number of rows. Similarly, for logistic regression previous work had worse $\operatorname{poly}(\mu d)$ factors in its sketching dimension. We also give a tradeoff that yields a $1+\varepsilon$ approximation in input sparsity time by increasing the total size to $(d\log(n)/\varepsilon)^{O(1/\varepsilon)}$ for $\ell_1$ and to $(\mu d\log(n)/\varepsilon)^{O(1/\varepsilon)}$ for logistic regression. Finally, we show that our sketch can be extended to approximate a regularized version of logistic regression where the data-dependent regularizer corresponds to the variance of the individual logistic losses.
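
As a point of reference for what "solving in the sketch space" means, here is the textbook oblivious $\ell_1$ sketch with i.i.d. Cauchy entries (a classical baseline, not the paper's construction), followed by an exact LP solve of the sketched $\ell_1$ regression:

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression_cauchy_sketch(A, b, m, rng):
    """Sketch min_x ||Ax - b||_1 down to m rows with a Cauchy matrix S,
    then solve min_x ||SAx - Sb||_1 exactly as a linear program.
    A: (n, d), b: (n,), m: number of sketch rows."""
    n, d = A.shape
    S = rng.standard_cauchy((m, n))  # scaling does not affect the argmin
    SA, Sb = S @ A, S @ b
    # LP formulation: minimize sum(t) s.t. -t <= SA x - Sb <= t.
    c = np.concatenate([np.zeros(d), np.ones(m)])
    A_ub = np.block([[SA, -np.eye(m)], [-SA, -np.eye(m)]])
    b_ub = np.concatenate([Sb, -Sb])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + m))
    return res.x[:d]
```

The point of the abstract's contribution is precisely to shrink the required $m$ (here left as a parameter) to near-linear in $d$ while keeping an $O(1)$-approximation.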
