
In this paper we carefully combine Fredman's trick [SICOMP'76] and Matoušek's approach for dominance product [IPL'91] to obtain powerful results in fine-grained complexity:

- Under the hypothesis that APSP for undirected graphs with edge weights in $\{1, 2, \ldots, n\}$ requires $n^{3-o(1)}$ time (when $\omega=2$), we show a variety of conditional lower bounds, including an $n^{7/3-o(1)}$ lower bound for unweighted directed APSP and an $n^{2.2-o(1)}$ lower bound for computing the Minimum Witness Product between two $n \times n$ Boolean matrices, even if $\omega=2$, improving upon their trivial $n^2$ lower bounds. Our techniques can also be used to reduce the unweighted directed APSP problem to other problems. In particular, we show that (when $\omega = 2$), if unweighted directed APSP requires $n^{2.5-o(1)}$ time, then Minimum Witness Product requires $n^{7/3-o(1)}$ time.
- We show that, surprisingly, many central problems in fine-grained complexity are equivalent to their natural counting versions. In particular, we show that Min-Plus Product and Exact Triangle are subcubically equivalent to their counting versions, and 3SUM is subquadratically equivalent to its counting version.
- We obtain new algorithms using new variants of the Balog-Szemerédi-Gowers theorem from additive combinatorics. For example, we get an $O(n^{3.83})$ time deterministic algorithm for exactly counting the number of shortest paths in an arbitrary weighted graph, improving upon the textbook $\widetilde{O}(n^{4})$ time algorithm. We also get faster algorithms for 3SUM in preprocessed universes, and deterministic algorithms for 3SUM on monotone sets in $\{1, 2, \ldots, n\}^d$.
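For context, the first ingredient above can be stated in two lines. The Min-Plus Product of two $n \times n$ matrices is
$$(A \star B)[i,j] = \min_{k}\big(A[i,k] + B[k,j]\big),$$
and Fredman's trick is the observation that
$$A[i,k] + B[k,j] \le A[i,k'] + B[k',j] \iff A[i,k] - A[i,k'] \le B[k',j] - B[k,j],$$
so a comparison between two candidate witnesses $k, k'$ can be resolved from differences within $A$ and within $B$ alone, which can be sorted or otherwise preprocessed in advance.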

Related Content

One-to-one set matching is a key design in DETR for establishing its end-to-end capability, so that object detection does not require a hand-crafted NMS (non-maximum suppression) step to remove duplicate detections. This end-to-end signature is important for the versatility of DETR, and it has been generalized to broader vision tasks. However, we note that one-to-one set matching assigns few queries as positive samples, which significantly reduces the training efficacy of those samples. We propose a simple yet effective method based on a hybrid matching scheme that combines the original one-to-one matching branch with an auxiliary one-to-many matching branch during training. Our hybrid strategy significantly improves accuracy. During inference, only the original one-to-one matching branch is used, thus maintaining the end-to-end merit and the same inference efficiency as DETR. We name the method H-DETR, and show that a wide range of representative DETR methods can be consistently improved across a wide range of visual tasks, including DeformableDETR, PETRv2, PETR, and TransTrack, among others. The code is available at: //github.com/HDETR
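As a rough illustration of the hybrid scheme (a minimal sketch, not the H-DETR implementation; the random cost matrix, the auxiliary weight, and $k$ are placeholder assumptions):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_loss(cost):
    """Hungarian matching: each ground-truth object gets exactly one query."""
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

def one_to_many_loss(cost, k=3):
    """Auxiliary branch: each ground-truth object is matched to its k
    cheapest queries, so many more queries serve as positive samples."""
    return np.sort(cost, axis=0)[:k, :].sum()

rng = np.random.default_rng(0)
cost = rng.random((100, 5))   # placeholder |queries| x |objects| matching costs
aux_weight = 1.0              # placeholder weight for the auxiliary loss
train_loss = one_to_one_loss(cost) + aux_weight * one_to_many_loss(cost)
# At inference only the one-to-one branch is kept, so no NMS is needed.
```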

The problem of model counting, also known as #SAT, is to compute the number of models or satisfying assignments of a given Boolean formula $F$. Model counting is a fundamental problem in computer science with a wide range of applications. In recent years, there has been a growing interest in using hashing-based techniques for approximate model counting that provide $(\varepsilon, \delta)$-guarantees: i.e., the count returned is within a $(1+\varepsilon)$-factor of the exact count with confidence at least $1-\delta$. While hashing-based techniques attain reasonable scalability for large enough values of $\delta$, their scalability is severely impacted for smaller values of $\delta$, thereby preventing their adoption in application domains that require estimates with high confidence. The primary contribution of this paper is to address the Achilles heel of hashing-based techniques: we propose a novel approach based on rounding that allows us to achieve a significant reduction in runtime for smaller values of $\delta$. The resulting counter, called RoundMC, achieves a substantial runtime performance improvement over the current state-of-the-art counter, ApproxMC. In particular, our extensive evaluation over a benchmark suite consisting of 1890 instances shows that RoundMC solves 204 more instances than ApproxMC, and achieves a $4\times$ speedup over ApproxMC.
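To see where the $(\varepsilon, \delta)$-guarantees come from, here is a toy sketch of the generic XOR-hashing paradigm that such counters build on (not RoundMC's algorithm; the threshold, repetition count, and brute-force model enumeration are illustrative assumptions):

```python
import random

def count_with_xor(models, n_vars, m, rng):
    """Count the models that survive m random XOR (parity) constraints."""
    constraints = [(rng.getrandbits(n_vars), rng.getrandbits(1))
                   for _ in range(m)]
    def survives(x):
        return all(bin(x & mask).count("1") % 2 == parity
                   for mask, parity in constraints)
    return sum(1 for x in models if survives(x))

def approx_count(models, n_vars, thresh=8, reps=9, seed=0):
    """Toy hashing-based counter: add XOR constraints until the surviving
    cell is small, estimate cell_size * 2^m, take the median over reps.
    thresh and reps are illustrative; real counters derive them from
    (epsilon, delta)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        for m in range(n_vars + 1):
            c = count_with_xor(models, n_vars, m, rng)
            if c <= thresh:
                estimates.append(c * 2 ** m)
                break
    return sorted(estimates)[len(estimates) // 2]

# Toy "formula" over 10 variables whose models are the assignments with an
# even number of 1s among the low 4 bits (exact count: 512).
models = [x for x in range(2 ** 10) if bin(x & 0xF).count("1") % 2 == 0]
print(approx_count(models, 10))
```

Raising the repetition count drives the failure probability $\delta$ down, which is exactly the regime where the paper's rounding technique saves runtime.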

Given a set $P$ of $n$ weighted points and a set $S$ of $m$ disks in the plane, the hitting set problem is to compute a subset $P'$ of points of $P$ such that each disk contains at least one point of $P'$ and the total weight of all points of $P'$ is minimized. The problem is known to be NP-hard. In this paper, we consider a line-constrained version of the problem in which all disks are centered on a line $\ell$. We present an $O((m+n)\log(m+n)+\kappa \log m)$ time algorithm for the problem, where $\kappa$ is the number of pairs of disks that intersect. For the unit-disk case where all disks have the same radius, the running time can be reduced to $O((n + m)\log(m + n))$. In addition, we solve the problem in $O((m + n)\log(m + n))$ time in the $L_{\infty}$ and $L_1$ metrics, in which a disk is a square and a diamond, respectively. Our techniques can also be used to solve other geometric hitting set problems. For example, given in the plane a set $P$ of $n$ weighted points and a set $S$ of $n$ half-planes, we solve in $O(n^4\log n)$ time the problem of finding a minimum weight hitting set of $P$ for $S$. This improves the previous best algorithm of $O(n^6)$ time by nearly a quadratic factor.
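The line-constrained structure can be made concrete: a point $p=(p_x,p_y)$ hits a disk of radius $r$ centered at $(c,0)$ exactly when $|p_x - c| \le \sqrt{r^2 - p_y^2}$, so each point corresponds to an interval of admissible disk centers. The sketch below performs this transformation for the unit-disk case and then checks tiny instances by brute force; it is a naive correctness baseline, nothing like the paper's near-linear algorithm:

```python
import math, itertools

def hit_intervals(points, r):
    """Interval of disk centers on the x-axis hit by each weighted point
    (unit-disk case: every disk has the same radius r)."""
    iv = []
    for px, py, w in points:
        if abs(py) > r:
            iv.append(None)                      # this point hits no disk
        else:
            s = math.sqrt(r * r - py * py)
            iv.append((px - s, px + s))
    return iv

def brute_force_hitting_set(points, centers, r):
    """Exponential enumeration of point subsets -- a baseline for tiny
    instances only."""
    iv = hit_intervals(points, r)
    best = None
    for k in range(1, len(points) + 1):
        for sub in itertools.combinations(range(len(points)), k):
            if all(any(iv[i] and iv[i][0] <= c <= iv[i][1] for i in sub)
                   for c in centers):
                w = sum(points[i][2] for i in sub)
                if best is None or w < best[0]:
                    best = (w, sub)
    return best

pts = [(0.0, 0.5, 2.0), (2.0, 0.0, 1.0), (4.0, 0.8, 1.5)]   # (x, y, weight)
print(brute_force_hitting_set(pts, centers=[0.2, 1.5, 3.9], r=1.0))
```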

Local search is a powerful heuristic in optimization and computer science, the complexity of which has been studied in the white box and black box models. In the black box model, we are given a graph $G = (V,E)$ and oracle access to a function $f : V \to \mathbb{R}$. The local search problem is to find a vertex $v$ that is a local minimum, i.e. with $f(v) \leq f(u)$ for all $(u,v) \in E$, using as few queries as possible. The query complexity is well understood on the grid and the hypercube, but much less is known beyond them. We show that the query complexity of local search on constant-degree $d$-regular expanders is $\Omega\left(\frac{\sqrt{n}}{\log{n}}\right)$, where $n$ is the number of vertices. This matches within a logarithmic factor the upper bound of $O(\sqrt{n})$ for constant-degree graphs from Aldous (1983), implying that steepest descent with a warm start is an essentially optimal algorithm for expanders. The best lower bound known from prior work was $\Omega\left(\frac{\sqrt[8]{n}}{\log{n}}\right)$, shown by Santha and Szegedy (2004) for quantum and randomized algorithms. We obtain this result by considering a broader framework of graph features such as vertex congestion and separation number. We show that for each graph, the randomized query complexity of local search is $\Omega\left(\frac{n^{1.5}}{g}\right)$, where $g$ is the vertex congestion of the graph, and $\Omega\left(\sqrt[4]{\frac{s}{\Delta}}\right)$, where $s$ is the separation number and $\Delta$ is the maximum degree. For separation number the previous bound was $\Omega\left(\sqrt[8]{\frac{s}{\Delta}} /\log{n}\right)$, given by Santha and Szegedy for quantum and randomized algorithms. We also show a variant of the relational adversary method of Aaronson (2006) that is asymptotically at least as strong as the original for all randomized algorithms, and strictly stronger for some problems.
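The matching upper bound is achieved by steepest descent with a warm start; a minimal sketch (the toy instance is an assumption, and for simplicity $f$ is re-queried freely, whereas a real implementation memoizes so each vertex costs one query):

```python
import random

def local_search(adj, f, seed=0):
    """Warm start in the spirit of Aldous' O(sqrt(n)) bound: query about
    sqrt(n) random vertices, then run steepest descent from the best one."""
    n = len(adj)
    rng = random.Random(seed)
    k = max(1, int(round(n ** 0.5)))
    v = min(rng.sample(range(n), k), key=f)      # warm start
    while True:
        u = min(adj[v], key=f)                   # steepest neighbor
        if f(u) >= f(v):
            return v                             # local minimum found
        v = u

# Toy instance: a 16-cycle whose only local minimum is vertex 5.
n = 16
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
print(local_search(adj, lambda v: abs(v - 5)))   # -> 5
```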

This paper proposes a novel method for estimating the set of plausible poses of a rigid object from a set of points with volumetric information, such as whether each point is in free space or on the surface of the object. In particular, we study how pose can be estimated from force and tactile data arising from contact. Using data derived from contact is challenging because it is inherently less information-dense than visual data, and thus the pose estimation problem is severely under-constrained when there are few contacts. Rather than attempting to estimate the true pose of the object, which is not tractable without a large number of contacts, we seek to estimate a plausible set of poses which obey the constraints imposed by the sensor data. Existing methods struggle to estimate this set because they are either designed for single pose estimates or require informative priors to be effective. Our approach to this problem, Constrained pose Hypothesis Set Elimination (CHSEL), has three key attributes: 1) It considers volumetric information, which allows us to account for known free space; 2) It uses a novel differentiable volumetric cost function to take advantage of powerful gradient-based optimization tools; and 3) It uses methods from the Quality Diversity (QD) optimization literature to produce a diverse set of high-quality poses. To our knowledge, QD methods have not been used previously for pose registration. We also show how to update our plausible pose estimates online as more data is gathered by the robot. Our experiments suggest that CHSEL shows large performance improvements over several baseline methods for both simulated and real-world data.
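As a toy illustration of a volumetric cost of this flavor (using a sphere signed-distance function as the object model and a translation-only pose; CHSEL's actual cost, object representation, and full pose parameterization differ):

```python
import numpy as np

def sdf_sphere(p, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center, axis=-1) - radius

def volumetric_cost(translation, free_pts, surf_pts, radius=1.0):
    """Known free-space points must lie outside the object (sdf >= 0) and
    contact points on its surface (sdf == 0); violations are penalized.
    Swapping numpy for an autodiff framework makes this cost usable with
    gradient-based optimizers."""
    sdf_free = sdf_sphere(free_pts - translation, np.zeros(3), radius)
    sdf_surf = sdf_sphere(surf_pts - translation, np.zeros(3), radius)
    return np.sum(np.maximum(0.0, -sdf_free) ** 2) + np.sum(sdf_surf ** 2)

free = np.array([[2.0, 0.0, 0.0], [0.0, 2.5, 0.0]])   # observed free space
surf = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0]])  # observed contacts
print(volumetric_cost(np.zeros(3), free, surf))       # 0.0: pose is consistent
```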

Given data ${\rm X}\in\mathbb{R}^{n\times d}$ and labels $\mathbf{y}\in\mathbb{R}^{n}$, the goal is to find $\mathbf{w}\in\mathbb{R}^d$ minimizing $\Vert{\rm X}\mathbf{w}-\mathbf{y}\Vert^2$. We give a polynomial-time algorithm that, \emph{oblivious to $\mathbf{y}$}, throws out $n/(d+\sqrt{n})$ data points and is a $(1+d/n)$-approximation to optimal in expectation. The motivation is tight approximation with reduced label complexity (the number of labels revealed). We reduce label complexity by $\Omega(\sqrt{n})$. Open question: can label complexity be reduced by $\Omega(n)$ with a tight $(1+d/n)$-approximation?
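One natural oblivious heuristic, shown below purely for intuition and not necessarily the paper's algorithm, drops the lowest-leverage rows: leverage scores depend only on ${\rm X}$, so the labels of discarded points are never revealed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Leverage scores from a thin QR factorization; they are a function of X
# alone, hence oblivious to y.
Q, _ = np.linalg.qr(X)
lev = (Q ** 2).sum(axis=1)
drop = int(n / (d + np.sqrt(n)))            # number of points thrown out
keep = np.argsort(lev)[drop:]               # discard lowest-leverage rows

w_full = np.linalg.lstsq(X, y, rcond=None)[0]          # needs all n labels
w_sub = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]  # needs n - drop labels
print(np.linalg.norm(X @ w_sub - y) ** 2 /
      np.linalg.norm(X @ w_full - y) ** 2)  # approximation ratio, close to 1
```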

Early detection of breast cancer is crucial for improving patient outcomes. The Institut Català de la Salut (ICS) has launched the DigiPatICS project to develop and implement artificial intelligence algorithms to assist with the diagnosis of cancer. In this paper, we propose a new approach for addressing the color normalization problem in HER2-stained histopathological images of breast cancer tissue, posed as a style transfer problem. We combine the Color Deconvolution technique with the Pix2Pix GAN network to present a novel approach to correct the color variations between different HER2 stain brands. Our approach focuses on maintaining the HER2 score of the cells in the transformed images, which is crucial for the HER2 analysis. Results demonstrate that our final model outperforms state-of-the-art image style transfer methods in maintaining the cell classes in the transformed images, and is just as effective at generating realistic images.
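For context, Color Deconvolution (in the style of Ruifrok and Johnston) separates an RGB image into per-stain concentration maps via optical densities; a minimal sketch (the stain vectors below are illustrative placeholders, since real HER2 stain vectors are calibrated per brand):

```python
import numpy as np

# Rows are stain color vectors in optical-density space (placeholders;
# actual HER2/hematoxylin vectors must be calibrated for each stain brand).
stains = np.array([[0.650, 0.704, 0.286],   # hematoxylin-like (assumed)
                   [0.268, 0.570, 0.776],   # DAB/HER2-like (assumed)
                   [0.754, 0.077, 0.652]])  # residual channel (assumed)
stains /= np.linalg.norm(stains, axis=1, keepdims=True)

def color_deconvolve(rgb):
    """rgb: HxWx3 uint8 image -> per-pixel stain concentrations."""
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)   # optical density
    return od.reshape(-1, 3) @ np.linalg.inv(stains)    # OD = C @ stains

img = (np.random.rand(4, 4, 3) * 255).astype(np.uint8)
conc = color_deconvolve(img).reshape(4, 4, 3)
```

Working in this concentration space is what lets a style-transfer network recolor between stain brands while leaving the per-cell HER2 signal intact.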

We present a novel linearizable wait-free queue implementation using single-word CAS instructions. Previous lock-free queue implementations from CAS all have amortized step complexity of $\Omega(p)$ per operation in worst-case executions, where $p$ is the number of processes that access the queue. Our new wait-free queue takes $O(\log p)$ steps per enqueue and $O(\log^2 p +\log q)$ steps per dequeue, where $q$ is the size of the queue. A bounded-space version of the implementation has $O(\log p \log(p+q))$ amortized step complexity per operation.
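For context, the sketch below simulates single-word CAS in Python and implements the classic Michael-Scott lock-free queue, representative of the $\Omega(p)$-amortized baselines the paper improves on; it is not the new wait-free construction:

```python
import threading

class AtomicRef:
    """Simulated single-word CAS on a reference (a Python stand-in for the
    hardware compare-and-swap instruction the paper assumes)."""
    def __init__(self, ref=None):
        self._ref, self._lock = ref, threading.Lock()
    def get(self):
        return self._ref
    def cas(self, expected, new):
        with self._lock:
            if self._ref is expected:
                self._ref = new
                return True
            return False

class Node:
    def __init__(self, value=None):
        self.value, self.next = value, AtomicRef()

class MichaelScottQueue:
    """Classic lock-free (not wait-free) linearizable queue from CAS."""
    def __init__(self):
        dummy = Node()
        self.head, self.tail = AtomicRef(dummy), AtomicRef(dummy)

    def enqueue(self, v):
        node = Node(v)
        while True:
            last = self.tail.get()
            nxt = last.next.get()
            if nxt is None:
                if last.next.cas(None, node):   # link the new node
                    self.tail.cas(last, node)   # then swing the tail
                    return
            else:
                self.tail.cas(last, nxt)        # help a stalled enqueue

    def dequeue(self):
        while True:
            first, last = self.head.get(), self.tail.get()
            nxt = first.next.get()
            if nxt is None:
                return None                     # queue is empty
            if first is last:
                self.tail.cas(last, nxt)        # help swing the tail
            elif self.head.cas(first, nxt):
                return nxt.value

q = MichaelScottQueue()
q.enqueue(1); q.enqueue(2)
print(q.dequeue(), q.dequeue(), q.dequeue())    # 1 2 None
```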

In this article, we will look at autoencoders, covering the mathematics and the fundamental concepts behind them. We will discuss what they are, what their limitations are, and what their typical use cases are, and we will look at some examples. We will start with a general introduction to autoencoders, and we will discuss the role of the activation function in the output layer and of the loss function. We will then discuss what the reconstruction error is. Finally, we will look at typical applications such as dimensionality reduction, classification, denoising, and anomaly detection. This paper contains the notes of a PhD-level lecture on autoencoders given in 2021.
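As a concrete minimal example, here is a linear autoencoder trained by gradient descent on the mean squared reconstruction error (a toy sketch with hand-derived gradients; practical autoencoders add nonlinear activations and rely on a framework's automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 8, 3                                    # samples, input dim, code dim
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))  # exactly rank-k data

W1 = 0.1 * rng.normal(size=(d, k))                     # encoder weights
W2 = 0.1 * rng.normal(size=(k, d))                     # decoder weights
lr = 0.01
for _ in range(2000):
    Z = X @ W1                                         # codes (bottleneck)
    Xhat = Z @ W2                                      # reconstruction
    G = 2.0 * (Xhat - X) / n                           # grad of loss w.r.t. Xhat
    gW2 = Z.T @ G
    gW1 = X.T @ G @ W2.T
    W1 -= lr * gW1
    W2 -= lr * gW2
print(np.mean((X @ W1 @ W2 - X) ** 2))                 # reconstruction error near 0
```

Since the data is exactly rank $k$ and the bottleneck has width $k$, the reconstruction error can be driven toward zero; on full-rank data the bottleneck forces the lossy compression that makes autoencoders useful for dimensionality reduction and anomaly detection.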

Attributed graph clustering is challenging as it requires joint modelling of graph structures and node attributes. Recent progress on graph convolutional networks has proved that graph convolution is effective in combining structural and content information, and several recent methods based on it have achieved promising clustering performance on some real attributed networks. However, there is limited understanding of how graph convolution affects clustering performance and how to properly use it to optimize performance for different graphs. Existing methods essentially use graph convolution of a fixed and low order that only takes into account neighbours within a few hops of each node, which underutilizes node relations and ignores the diversity of graphs. In this paper, we propose an adaptive graph convolution method for attributed graph clustering that exploits high-order graph convolution to capture global cluster structure and adaptively selects the appropriate order for different graphs. We establish the validity of our method by theoretical analysis and extensive experiments on benchmark datasets. Empirical results show that our method compares favourably with state-of-the-art methods.
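A sketch of the high-order low-pass filtering idea (here with the filter $(I - L_{sym}/2)^k$ applied at a fixed order $k$, in the spirit of the method; the paper's contribution is selecting $k$ adaptively per graph):

```python
import numpy as np
from sklearn.cluster import KMeans

def high_order_filter(A, X, k):
    """Apply (I - L_sym/2)^k to node features X, where L_sym is the
    symmetric normalized Laplacian of adjacency matrix A: each application
    smooths features over neighbors, and larger k mixes in more-global
    cluster structure."""
    d = A.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dm @ A @ Dm
    G = np.eye(len(A)) - 0.5 * L
    for _ in range(k):
        X = G @ X
    return X

# Toy attributed graph: two 4-cliques joined by a single edge.
A = np.zeros((8, 8))
A[:4, :4] = 1; A[4:, 4:] = 1; A[3, 4] = A[4, 3] = 1
np.fill_diagonal(A, 0)
X = np.random.default_rng(0).normal(size=(8, 2))       # random node attributes
labels = KMeans(n_clusters=2, n_init=10).fit_predict(high_order_filter(A, X, k=4))
print(labels)
```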
