
We introduce the Observation Route Problem ($\textsf{ORP}$) defined as follows: Given a set of $n$ pairwise disjoint compact regions in the plane, find a shortest tour (route) such that an observer walking along this tour can see (observe) some point in each region from some point of the tour. The observer does \emph{not} need to see the entire boundary of an object. The tour is \emph{not} allowed to intersect the interior of any region (i.e., the regions are obstacles and therefore out of bounds). The problem exhibits similarity to both the Traveling Salesman Problem with Neighborhoods ($\textsf{TSPN}$) and the External Watchman Route Problem ($\textsf{EWRP}$). We distinguish two variants: the range of visibility is either limited to a bounding rectangle, or unlimited. We obtain the following results: (I) Given a family of $n$ disjoint convex bodies in the plane, computing a shortest observation route does not admit a $(c\log n)$-approximation for some absolute constant $c>0$ unless $\textsf{P} = \textsf{NP}$. (This holds for both limited and unlimited vision.) (II) Given a family of disjoint convex bodies in the plane, computing a shortest external watchman route is $\textsf{NP}$-hard. (This holds for both limited and unlimited vision, and even for families of axis-aligned squares.) (III) Given a family of $n$ disjoint fat convex polygons, an observation tour whose length is at most $O(\log{n})$ times the optimum can be computed in polynomial time. (This holds for limited vision.) (IV) For every $n \geq 5$, there exists a convex polygon with $n$ sides and all angles obtuse such that its perimeter is \emph{not} a shortest external watchman route. This refutes a conjecture by Absar and Whitesides (2006).
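To make the visibility constraint concrete, below is a minimal Python sketch (the helper names are our own; this is not the approximation algorithm from the paper) that conservatively tests whether an observer standing at a tour point sees a target convex polygon when all other regions act as opaque obstacles. It only checks sight lines to the target's vertices, so it can miss configurations where only an interior portion of an edge is visible.

    # Conservative visibility test for the ORP setting: regions are disjoint
    # convex polygons given as vertex lists; a sight segment must not cross
    # the interior of any other region.

    def orient(a, b, c):
        """Twice the signed area of triangle abc (positive if a, b, c are CCW)."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_cross(p, q, a, b):
        """True if segments pq and ab properly intersect (cross at interior points)."""
        d1, d2 = orient(p, q, a), orient(p, q, b)
        d3, d4 = orient(a, b, p), orient(a, b, q)
        return d1 * d2 < 0 and d3 * d4 < 0

    def blocked(p, q, poly):
        """True if segment pq properly crosses an edge of convex polygon poly."""
        n = len(poly)
        return any(segments_cross(p, q, poly[i], poly[(i + 1) % n]) for i in range(n))

    def sees_region(observer, target, obstacles):
        """Does `observer` see some vertex of `target` with no other region in the way?"""
        return any(all(not blocked(observer, v, obs) for obs in obstacles)
                   for v in target)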

Related Content

Recent diffusion probabilistic models (DPMs) have shown remarkable content-generation abilities; however, they often suffer from complex forward processes, resulting in inefficient solutions for the reverse process and prolonged sampling times. In this paper, we address these challenges by focusing on the diffusion process itself: we propose to decouple the intricate diffusion process into two comparatively simpler processes to improve generative efficacy and speed. In particular, we present a novel diffusion paradigm named DDM (Decoupled Diffusion Models) based on the Itô diffusion process, in which the image distribution is approximated by an explicit transition probability while the noise path is controlled by the standard Wiener process. We find that decoupling the diffusion process reduces the learning difficulty and that the explicit transition probability improves the generation speed significantly. We derive a new training objective for DPMs, which enables the model to learn to predict the noise and image components separately. Moreover, given the novel forward diffusion equation, we derive the reverse denoising formula of DDM, which naturally supports fewer generation steps without ordinary differential equation (ODE) based accelerators. Our experiments demonstrate that DDM outperforms previous DPMs by a large margin in the few-function-evaluation setting and achieves comparable performance in the long-function-evaluation setting. We also show that our framework can be applied to image-conditioned generation and high-resolution image synthesis, and that it can generate high-quality images with only 10 function evaluations.
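As a rough illustration of the decoupled paradigm, here is a toy NumPy sketch under the simplifying assumption that the image-attenuation term is the constant $-x_0$, which yields the explicit transition $x_t = (1-t)\,x_0 + \sqrt{t}\,\epsilon$; the paper's actual parameterization and sampler may differ, and the two-head `model` below is hypothetical.

    import numpy as np

    # Toy decoupled forward process: the image component follows an explicit
    # path (here x_0 decays linearly to 0 as t goes from 0 to 1), while the
    # noise component comes from the standard Wiener process, giving
    # x_t = (1 - t) * x_0 + sqrt(t) * eps.

    def forward_sample(x0, t, rng):
        eps = rng.standard_normal(x0.shape)
        return (1.0 - t) * x0 + np.sqrt(t) * eps, eps

    def decoupled_loss(model, x0, rng):
        """Train the (hypothetical two-head) model to predict the image and
        noise components separately, in the spirit of the decoupled objective."""
        t = rng.uniform(1e-4, 1.0)
        xt, eps = forward_sample(x0, t, rng)
        x0_pred, eps_pred = model(xt, t)
        return np.mean((x0_pred - x0) ** 2) + np.mean((eps_pred - eps) ** 2)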

The notion of a real-valued function is central to mathematics, computer science, and many other scientific fields. Despite this importance, there are hardly any positive results on decision procedures for predicate logical theories that reason about real-valued functions. This paper defines a first-order predicate language for reasoning about multi-dimensional smooth real-valued functions and their derivatives, and demonstrates that - despite the obvious undecidability barriers - certain positive decidability results for such a language are indeed possible.

A closed quasigeodesic is a closed curve on the surface of a polyhedron with at most $180^\circ$ of surface on both sides at all points; such curves can be locally unfolded straight. In 1949, Pogorelov proved that every convex polyhedron has at least three (non-self-intersecting) closed quasigeodesics, but the proof relies on a nonconstructive topological argument. We present the first finite algorithm to find a closed quasigeodesic on a given convex polyhedron, which is the first positive progress on a 1990 open problem by O'Rourke and Wyman. The algorithm establishes for the first time a pseudopolynomial upper bound on the total number of visits to faces (number of line segments), namely, $O\left(\frac{n \, L^3}{\epsilon^2 \, \ell^3}\right)$ where $n$ is the number of vertices of the polyhedron, $\epsilon$ is the minimum curvature of a vertex, $L$ is the length of the longest edge, and $\ell$ is the smallest distance within a face between a vertex and a nonincident edge (minimum feature size of any face). On the real RAM, the algorithm's running time is also pseudopolynomial, namely $O\left(\frac{n \, L^3}{\epsilon^2 \, \ell^3} \log n\right)$. On a word RAM, the running time grows to $O\left(b^2 \cdot \frac{n^8 \log n}{\epsilon^8} \cdot \frac{L^{21}}{\ell^{21}}\cdot 2^{O(|\Lambda|)}\right)$, where $|\Lambda|$ is the number of distinct edge lengths in the polyhedron, assuming its intrinsic or extrinsic geometry is given by rational coordinates each with at most $b$ bits. This time bound remains pseudopolynomial for polyhedra with $O(\log n)$ distinct edge lengths, but is exponential in the worst case. Along the way, we introduce the expression RAM model of computation, formalizing a connection between the real RAM and word RAM hinted at by past work on exact geometric computation.
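The "locally unfolded straight" property is what face-by-face tracing exploits: after unfolding a face into the plane, the curve continues as a straight segment, so each step reduces to a 2D ray/edge intersection. A minimal Python sketch of that elementary step (not the paper's algorithm) follows.

    # One tracing step: a geodesic entering a planar (unfolded) triangle at
    # point p with direction d leaves it where the ray p + t*d first meets an
    # edge. Solving p + t*d = a + s*(b - a) gives t and the edge parameter s.

    def ray_edge_hit(p, d, a, b):
        """Smallest t > 0 with p + t*d on segment ab, or None if missed."""
        ex, ey = b[0] - a[0], b[1] - a[1]
        det = ex * d[1] - ey * d[0]
        if abs(det) < 1e-12:               # ray parallel to the edge
            return None
        wx, wy = a[0] - p[0], a[1] - p[1]
        t = (ex * wy - ey * wx) / det
        s = (d[0] * wy - d[1] * wx) / det
        return t if t > 1e-12 and 0.0 <= s <= 1.0 else None

    def trace_across_face(p, d, tri):
        """Exit point and exit-edge index of a straight segment inside triangle tri."""
        hits = [(t, i) for i, (a, b) in enumerate(zip(tri, tri[1:] + tri[:1]))
                if (t := ray_edge_hit(p, d, a, b)) is not None]
        t, edge = min(hits)
        return (p[0] + t * d[0], p[1] + t * d[1]), edge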

We consider the problem of computing the Maximal Exact Matches (MEMs) of a given pattern $P[1..m]$ on a large repetitive text collection $T[1..n]$, which is represented as a (hopefully much smaller) run-length context-free grammar of size $g_{rl}$. We show that the problem can be solved in time $O(m^2 \log^\epsilon n)$, for any constant $\epsilon > 0$, on a data structure of size $O(g_{rl})$. Further, on a locally consistent grammar of size $O(\delta\log\frac{n}{\delta})$, the time decreases to $O(m\log m(\log m + \log^\epsilon n))$. The value $\delta$ is a function of the substring complexity of $T$ and $\Omega(\delta\log\frac{n}{\delta})$ is a tight lower bound on the compressibility of repetitive texts $T$, so our structure has optimal size in terms of $n$ and $\delta$. We extend our results to the problem of finding $q$-MEMs, which must appear at least $q$ times in $T$.
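For readers new to MEMs, the following naive Python reference (quadratic substring tests, nothing like the paper's compressed data structure) makes the definition concrete: a MEM is a substring $P[i..j)$ that occurs in $T$ and can be extended neither left nor right.

    # Naive MEM computation. Key fact used for left-maximality: if e(i) is the
    # largest j such that P[i:j] occurs in T, then e is non-decreasing in i,
    # and P[i:e(i)] is left-maximal exactly when e(i) > e(i-1).

    def mems(P, T, min_len=1):
        out, prev_end = [], -1
        for i in range(len(P)):
            j = i
            while j < len(P) and P[i:j + 1] in T:   # extend the match rightward
                j += 1
            if j - i >= min_len and j > prev_end:   # right- and left-maximal
                out.append((i, j))                  # MEM is P[i:j]
            prev_end = j
        return out

    # mems("ba", "ab") -> [(0, 1), (1, 2)]: both "b" and "a" are MEMs.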

We revisit the main result of Carmosino et al.~\cite{CILM18}, which shows that an $\Omega(n^{\omega/2+\epsilon})$ lower bound on noncommutative arithmetic circuit size (where $\omega$ is the matrix multiplication exponent) for a constant-degree $n$-variate polynomial family $(g_n)_n$, where each $g_n$ is a noncommutative polynomial, can be ``lifted'' to an exponential circuit size lower bound for another polynomial family $(f_n)$ obtained from $(g_n)$ by a lifting process. In this paper, we present a simpler and more conceptual automata-theoretic proof of their result.

We study pseudo-polynomial time algorithms for the fundamental \emph{0-1 Knapsack} problem. In terms of $n$ and $w_{\max}$, previous algorithms for 0-1 Knapsack have cubic time complexities: $O(n^2w_{\max})$ (Bellman 1957), $O(nw_{\max}^2)$ (Kellerer and Pferschy 2004), and $O(n + w_{\max}^3)$ (Polak, Rohwedder, and W\k{e}grzycki 2021). On the other hand, fine-grained complexity only rules out $O((n+w_{\max})^{2-\delta})$ running time, and it is an important open question in this area whether $\tilde O(n+w_{\max}^2)$ time is achievable. Our main result makes significant progress towards answering this question:

- The 0-1 Knapsack problem has a deterministic algorithm running in $\tilde O(n + w_{\max}^{2.5})$ time.

Our techniques also apply to the easier \emph{Subset Sum} problem:

- The Subset Sum problem has a randomized algorithm running in $\tilde O(n + w_{\max}^{1.5})$ time. This improves (and simplifies) the previous $\tilde O(n + w_{\max}^{5/3})$-time algorithm by Polak, Rohwedder, and W\k{e}grzycki (2021) (based on Galil and Margalit (1991), and Bringmann and Wellnitz (2021)).

Similar to recent works on Knapsack (and integer programs in general), our algorithms also utilize the \emph{proximity} between optimal integral solutions and fractional solutions. Our new ideas are as follows:

- Previous works used an $O(w_{\max})$ proximity bound in the $\ell_1$-norm. As our main conceptual contribution, we use an additive-combinatorial theorem by Erd\H{o}s and S\'{a}rk\"{o}zy (1990) to derive an $\ell_0$-proximity bound of $\tilde O(\sqrt{w_{\max}})$.

- The main technical component of our Knapsack result is then a dynamic programming algorithm that exploits both the $\ell_0$- and $\ell_1$-proximity bounds. It is based on a vast extension of the ``witness propagation'' method, originally designed by Deng, Mao, and Zhong (2023) for the easier \emph{unbounded} setting only.
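For context, the first cubic baseline above is the textbook Bellman dynamic program; here is a short Python sketch of it (capacity denoted $W$, giving $O(nW) \subseteq O(n^2 w_{\max})$ since one may assume $W \leq n\,w_{\max}$), not the new algorithm:

    # Bellman's (1957) O(n * W) dynamic program for 0-1 Knapsack.

    def knapsack(items, W):
        """items: list of (weight, profit) pairs; W: knapsack capacity."""
        best = [0] * (W + 1)                 # best[c] = max profit within capacity c
        for w, p in items:
            for c in range(W, w - 1, -1):    # downward scan: each item used at most once
                best[c] = max(best[c], best[c - w] + p)
        return best[W]

    # knapsack([(2, 3), (3, 4), (4, 5)], 5) -> 7 (take the first two items)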

The Cover Suffix Tree (CST) of a string $T$ is the suffix tree of $T$ with additional explicit nodes corresponding to halves of square substrings of $T$. In the CST an explicit node corresponding to a substring $C$ of $T$ is annotated with two numbers: the number of non-overlapping consecutive occurrences of $C$ and the total number of positions in $T$ that are covered by occurrences of $C$ in $T$. Kociumaka et al. (Algorithmica, 2015) have shown how to compute the CST of a length-$n$ string in $O(n \log n)$ time. We show how to compute the CST in $O(n)$ time assuming that $T$ is over an integer alphabet. Kociumaka et al. (Algorithmica, 2015; Theor. Comput. Sci., 2018) have shown that knowing the CST of a length-$n$ string $T$, one can compute a linear-sized representation of all seeds of $T$ as well as all shortest $\alpha$-partial covers and seeds in $T$ for a given $\alpha$ in $O(n)$ time. Thus our result implies linear-time algorithms computing these notions of quasiperiodicity. The resulting algorithm computing seeds is substantially different from the previous one (Kociumaka et al., SODA 2012, ACM Trans. Algorithms, 2020). Kociumaka et al. (Algorithmica, 2015) proposed an $O(n \log n)$-time algorithm for computing a shortest $\alpha$-partial cover for each $\alpha=1,\ldots,n$; we improve this complexity to $O(n)$. Our results are based on a new characterization of consecutive overlapping occurrences of a substring $S$ of $T$ in terms of the set of runs (see Kolpakov and Kucherov, FOCS 1999) in $T$. This new insight also leads to an $O(n)$-sized index for reporting overlapping consecutive occurrences of a given pattern $P$ of length $m$ in $O(m+output)$ time, where $output$ is the number of occurrences reported. In comparison, a general index for reporting bounded-gap consecutive occurrences of Navarro and Thankachan (Theor. Comput. Sci., 2016) uses $O(n \log n)$ space.
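To make one of the two CST annotations concrete, here is a naive Python reference for the number of covered positions of a single substring $C$ (the CST obtains this for all relevant nodes at once; this direct version is only illustrative):

    # Number of text positions covered by occurrences of C in T: find all
    # occurrences and take the size of the union of the intervals [i, i+|C|-1].

    def covered_positions(T, C):
        k = len(C)
        covered, last = 0, -1
        for i in (i for i in range(len(T) - k + 1) if T[i:i + k] == C):
            lo = max(i, last + 1)            # skip positions already counted
            end = i + k - 1
            if end >= lo:
                covered += end - lo + 1
            last = end
        return covered

    # covered_positions("aaaa", "aa") -> 4: occurrences at 0, 1, 2 cover everything.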

In the past decade, balanced datasets have been used to advance algorithms for classification, object detection, semantic segmentation, and anomaly detection in industrial applications. Specifically, for condition-based maintenance, automating visual inspection is crucial to ensure high quality. Deterioration prognostics attempts to optimize the decision process for predictive maintenance and proactive repair. In civil infrastructure and the living environment, damage data mining cannot avoid the imbalanced-data issue, because damage events are rare and improved operations keep most assets in high-quality condition. For visual inspection, the deteriorated classes acquired from the surfaces of concrete and steel components are often imbalanced. From numerous related surveys, we conclude that imbalanced data problems can be categorized into four types: 1) missing range of target and label variables, 2) majority-minority class imbalance, 3) foreground-background spatial imbalance, and 4) long-tailed pixel-wise class imbalance. Since 2015, there have been many studies of imbalanced data using deep learning approaches, including regression, image classification, object detection, and semantic segmentation. However, anomaly detection for imbalanced data remains far less explored. In this study, we highlight a one-class anomaly detection application, which decides whether an instance is anomalous or not, and demonstrate clear examples on imbalanced vision datasets: blood smears, lung infections, hazardous driving, wood, concrete deterioration, river sludge, and disaster damage. As illustrated in Fig. 1, we provide key results on the advantage of damage vision mining, hypothesizing that the more effective the range of the positive ratio, the higher the accuracy gain of the anomaly detection application. In our imbalanced studies, compared with the balanced case of positive ratio 1/1, we find that there is an applicable range of positive ratios in which accuracy is consistently high.
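The positive-ratio hypothesis can be illustrated with a toy one-class experiment; the sketch below uses synthetic Gaussian data and scikit-learn's IsolationForest as a stand-in detector, so its numbers have nothing to do with the datasets or the Fig. 1 results of the paper.

    # Toy positive-ratio sweep for one-class anomaly detection. IsolationForest
    # labels inliers +1 and outliers -1, so accuracy is measured against those.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(2000, 8))   # "healthy" samples
    anomal = rng.normal(4.0, 1.0, size=(2000, 8))   # "deteriorated" samples

    for ratio in (1.0, 0.5, 0.2, 0.1, 0.05):        # positives per negative
        k = int(len(normal) * ratio)
        X = np.vstack([normal, anomal[:k]])
        y = np.array([1] * len(normal) + [-1] * k)
        acc = (IsolationForest(random_state=0).fit(X).predict(X) == y).mean()
        print(f"positive ratio {ratio:.2f}: accuracy {acc:.3f}")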

2D-based industrial anomaly detection has been widely discussed; however, multimodal industrial anomaly detection based on 3D point clouds and RGB images still has many untouched areas. Existing multimodal industrial anomaly detection methods directly concatenate the multimodal features, which leads to a strong disturbance between features and harms the detection performance. In this paper, we propose Multi-3D-Memory (M3DM), a novel multimodal anomaly detection method with a hybrid fusion scheme: first, we design an unsupervised feature fusion with patch-wise contrastive learning to encourage the interaction of different modal features; second, we use a decision-layer fusion with multiple memory banks to avoid loss of information, and additional novelty classifiers to make the final decision. We further propose a point feature alignment operation to better align the point cloud and RGB features. Extensive experiments show that our multimodal industrial anomaly detection model outperforms the state-of-the-art (SOTA) methods in both detection and segmentation precision on the MVTec-3D AD dataset. Code is available at //github.com/nomewang/M3DM.
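The patch-wise contrastive part of the hybrid fusion can be sketched as a symmetric InfoNCE loss between aligned RGB and point-cloud patch features; this is our own minimal PyTorch rendering of the idea, not the released M3DM module.

    import torch
    import torch.nn.functional as F

    # Patch-wise contrastive fusion sketch: the RGB feature and point-cloud
    # feature of the same patch are positives; all other patches are negatives.

    def patch_contrastive_loss(f_rgb, f_pts, tau=0.07):
        """f_rgb, f_pts: (num_patches, dim) features of spatially aligned patches."""
        f_rgb = F.normalize(f_rgb, dim=-1)
        f_pts = F.normalize(f_pts, dim=-1)
        logits = f_rgb @ f_pts.t() / tau            # cosine similarities / temperature
        target = torch.arange(len(f_rgb))           # patch i matches patch i
        return (F.cross_entropy(logits, target) +
                F.cross_entropy(logits.t(), target)) / 2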

We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
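The lattice itself is easy to picture: besides the character sequence, every substring found in the lexicon contributes an extra word path. A small Python sketch of this matching step (the gated LSTM cells are omitted), using the classic example sentence from the paper:

    # Collect all lexicon words overlaid on the character sequence; these word
    # spans are the additional lattice paths the gated cells choose between.

    def build_lattice(sentence, lexicon, max_word_len=4):
        words = [(i, j, sentence[i:j])
                 for i in range(len(sentence))
                 for j in range(i + 2, min(i + max_word_len, len(sentence)) + 1)
                 if sentence[i:j] in lexicon]
        return list(sentence), words

    chars, words = build_lattice("南京市长江大桥",
                                 {"南京", "南京市", "长江", "大桥", "长江大桥"})
    # words -> [(0, 2, '南京'), (0, 3, '南京市'), (3, 5, '长江'),
    #           (3, 7, '长江大桥'), (5, 7, '大桥')]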
