
Let $P$ be a set of at most $n$ points and let $R$ be a set of at most $n$ geometric ranges, such as disks or rectangles, where each $p \in P$ has an associated supply $s_{p} > 0$, and each $r \in R$ has an associated demand $d_{r} > 0$. An assignment is a set $\mathcal{A}$ of ordered triples $(p,r,a_{pr}) \in P \times R \times \mathbb{R}_{>0}$ such that $p \in r$. We show how to compute a maximum assignment that satisfies the constraints given by the supplies and demands. Using our techniques, we can also solve minimum bottleneck problems, such as computing a perfect matching between a set of $n$ red points~$P$ and $n$ blue points $Q$ that minimizes the length of the longest edge. For the $L_\infty$-metric, we can do this in time $O(n^{1+\varepsilon})$ in any fixed dimension, and for the $L_2$-metric in the plane in time $O(n^{4/3 + \varepsilon})$, for any $\varepsilon > 0$.
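
The bottleneck matching problem described above can be illustrated with a small brute-force sketch: binary-search the candidate edge lengths and test each threshold with a maximum bipartite matching. This is an $O(n^2)$-edge baseline under assumed toy inputs, not the paper's subquadratic algorithm.

```python
# Brute-force bottleneck perfect matching under the L_inf metric.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def bottleneck_matching_linf(P, Q):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    # All pairwise L_inf distances; the bottleneck value is one of these.
    D = np.max(np.abs(P[:, None, :] - Q[None, :, :]), axis=2)
    candidates = np.unique(D)

    def feasible(t):
        # Keep only edges of length <= t and test for a perfect matching.
        graph = csr_matrix((D <= t).astype(int))
        match = maximum_bipartite_matching(graph, perm_type='column')
        return np.all(match >= 0)

    lo, hi = 0, len(candidates) - 1
    while lo < hi:                      # binary search over edge lengths
        mid = (lo + hi) // 2
        if feasible(candidates[mid]):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]

print(bottleneck_matching_linf([(0, 0), (2, 1)], [(1, 0), (5, 5)]))  # -> 4.0
```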

Related content

A random $m\times n$ matrix $S$ is an oblivious subspace embedding (OSE) with parameters $\epsilon>0$, $\delta\in(0,1/3)$ and $d\leq m\leq n$, if for any $d$-dimensional subspace $W\subseteq \mathbb{R}^n$, $P\big(\,\forall_{x\in W}\ (1+\epsilon)^{-1}\|x\|\leq\|Sx\|\leq (1+\epsilon)\|x\|\,\big)\geq 1-\delta.$ It is known that the embedding dimension of an OSE must satisfy $m\geq d$, and for any $\theta > 0$, a Gaussian embedding matrix with $m\geq (1+\theta) d$ is an OSE with $\epsilon = O_\theta(1)$. However, such an optimal embedding dimension is not known for other embeddings. Of particular interest are sparse OSEs, having $s\ll m$ non-zeros per column, with applications to problems such as least squares regression and low-rank approximation. We show that, given any $\theta > 0$, an $m\times n$ random matrix $S$ with $m\geq (1+\theta)d$ consisting of randomly sparsified $\pm1/\sqrt s$ entries and having $s= O(\log^4(d))$ non-zeros per column, is an oblivious subspace embedding with $\epsilon = O_{\theta}(1)$. Our result addresses the main open question posed by Nelson and Nguyen (FOCS 2013), who conjectured that sparse OSEs can achieve $m=O(d)$ embedding dimension, and it improves on $m=O(d\log(d))$ shown by Cohen (SODA 2016). We use this to construct the first oblivious subspace embedding with $O(d)$ embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression. We further extend our results to construct even sparser non-oblivious embeddings, leading to the first subspace embedding with low distortion $\epsilon=o(1)$ and optimal embedding dimension $m=O(d/\epsilon^2)$ that can be applied in current matrix multiplication time.
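
The construction is simple to sketch empirically: each column of $S$ gets $s$ randomly placed $\pm1/\sqrt s$ entries, and one checks the singular values of $SU$ for an orthonormal basis $U$ of a random subspace. The sizes below (and $s=8$, standing in for the paper's $O(\log^4 d)$) are illustrative choices, not the paper's constants.

```python
# Empirical sketch of a sparse oblivious subspace embedding.
import numpy as np

rng = np.random.default_rng(0)
n, d, theta = 10_000, 50, 1.0
m = int((1 + theta) * d)                 # embedding dimension m = (1+theta)d
s = 8                                    # non-zeros per column

S = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# Test on one random d-dimensional subspace W = range(U).
U = np.linalg.qr(rng.standard_normal((n, d)))[0]
sv = np.linalg.svd(S @ U, compute_uv=False)
print(f"singular values of SU lie in [{sv.min():.2f}, {sv.max():.2f}]")
# Values bounded away from 0 and infinity mean constant distortion on W,
# i.e. (1+eps)^{-1} ||x|| <= ||Sx|| <= (1+eps) ||x|| for all x in W.
```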

A large number of real and abstract systems involve the transformation of some basic resource into respective products under the action of multiple processing agents, which can be understood as multiple-agent production systems (MAP). At each discrete time instant, for each agent, a fraction of the resources is assumed to be kept, forwarded to other agents, or converted into work with some efficiency. The present work describes a systematic study of nine basic MAP architectures subdivided into two main groups, namely parallel and sequential distribution of resources from a single respective source. Several types of interconnections among the involved processing agents are also considered. The resulting MAP architectures are studied in terms of the total amount of work, the dispersion of the resources (states) among the agents, and the transition times from the start of operation until the respective steady state. Several interesting results are obtained and discussed, including the observation that some of the parallel designs were able to yield maximum work and minimum state dispersion, achieved at the expense of longer transition times and the use of several interconnections between the source and the agents. The results obtained for the sequential designs indicate that relatively high performance can be obtained for some specific cases.
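
The keep/forward/convert dynamics can be made concrete with a short simulation. The sketch below assumes one parallel architecture with a ring interconnection and illustrative fractions and efficiency; it is not one of the paper's nine specific designs.

```python
# Toy discrete-time simulation of a parallel MAP architecture.
import numpy as np

N, steps = 4, 200
keep, forward, eff = 0.5, 0.3, 0.9        # keep/forward fractions, efficiency
inflow = np.full(N, 1.0 / N)              # the source splits resource equally

x = np.zeros(N)                           # resource state per agent
total_work = 0.0
for t in range(steps):
    x = x + inflow
    converted = (1 - keep - forward) * x  # fraction turned into work
    total_work += eff * converted.sum()
    # Forward in a ring: agent i passes its forwarded share to agent i+1.
    x = keep * x + np.roll(forward * x, 1)

print(f"total work after {steps} steps: {total_work:.2f}")
print(f"state dispersion (std over agents): {x.std():.4f}")
```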

Voxel-based segmentation volumes often store a large number of labels and voxels, and the resulting amount of data can make storage, transfer, and interactive visualization difficult. We present a lossless compression technique which addresses these challenges. It processes individual small bricks of a segmentation volume and compactly encodes the labelled regions and their boundaries by an iterative refinement scheme. The result for each brick is a list of labels, and a sequence of operations to reconstruct the brick which is further compressed using rANS-entropy coding. As the relative frequencies of operations are very similar across bricks, the entropy coding can use global frequency tables for an entire data set which enables efficient and effective parallel (de)compression. Our technique achieves high throughput (up to gigabytes per second both for compression and decompression) and strong compression ratios of about 1% to 3% of the original data set size while being applicable to GPU-based rendering. We evaluate our method for various data sets from different fields and demonstrate GPU-based volume visualization with on-the-fly decompression, level-of-detail rendering (with optional on-demand streaming of detail coefficients to the GPU), and a caching strategy for decompressed bricks for further performance improvement.
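
Two ingredients of this design, bricking with per-brick label lists and a single global frequency table for entropy coding, can be sketched in a few lines. The toy below only pools palette-index statistics and reports a Shannon estimate; the paper's actual codec encodes boundary-refinement operations and uses rANS.

```python
# Toy sketch: brick a label volume and pool symbol statistics globally.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
vol = rng.integers(0, 7, size=(32, 32, 32))   # fake segmentation labels
B = 8                                          # brick edge length

global_freq = Counter()
bricks = []
for i in range(0, 32, B):
    for j in range(0, 32, B):
        for k in range(0, 32, B):
            brick = vol[i:i+B, j:j+B, k:k+B]
            palette, local = np.unique(brick.ravel(), return_inverse=True)
            bricks.append((palette, local))    # per-brick label list + indices
            global_freq.update(local.tolist())

# Shannon estimate of the entropy-coded size with one global table.
total = sum(global_freq.values())
bits = -sum(c * np.log2(c / total) for c in global_freq.values())
print(f"{len(bricks)} bricks, est. {bits / 8 / vol.nbytes:.1%} of raw size")
```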

We present algorithms that compute the terminal configurations for sandpile instances in $O(n \log n)$ time on trees and $O(n)$ time on paths, where $n$ is the number of vertices. The Abelian Sandpile model is a well-known model used in exploring self-organized criticality. Despite a large amount of work on other aspects of sandpiles, there have been limited results in efficiently computing the terminal state, known as the sandpile prediction problem. Our algorithm improves the previous best runtime of $O(n \log^5 n)$ on trees [Ramachandran-Schild SODA '17] and $O(n \log n)$ on paths [Moore-Nilsson '99]. To do so, we move beyond the simulation of individual events by directly computing the number of firings for each vertex. The computation is accelerated using splittable binary search trees. In addition, we give algorithms in $O(n)$ time on cliques and $O(n \log^2 n)$ time on pseudotrees. Towards solving on general graphs, we provide a reduction that transforms the prediction problem on an arbitrary graph into problems on its subgraphs separated by any vertex set $P$. The reduction gives a time complexity of $O(\log^{|P|} n \cdot T)$ where $T$ denotes the total time for solving on each subgraph. We also give algorithms that work well with this reduction scheme.
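
For reference, the naive approach the paper improves upon is direct simulation: repeatedly topple any vertex whose chip count reaches its degree. A minimal sketch (using networkx, on an assumed toy path instance with a sink) is below.

```python
# Naive sandpile stabilizer: the simulation baseline, not the paper's method.
import networkx as nx
from collections import deque

def stabilize(G, chips, sink):
    chips = dict(chips)
    active = deque(v for v in G if v != sink and chips[v] >= G.degree(v))
    firings = {v: 0 for v in G}
    while active:
        v = active.popleft()
        deg = G.degree(v)
        if chips[v] < deg:                 # stale queue entry
            continue
        k = chips[v] // deg                # fire v in bulk, k times at once
        firings[v] += k
        chips[v] -= k * deg
        for u in G.neighbors(v):
            chips[u] += k
            if u != sink and chips[u] >= G.degree(u):
                active.append(u)
    return chips, firings

P = nx.path_graph(6)                       # a path; vertex 5 acts as the sink
final, fires = stabilize(P, {v: 3 for v in P}, sink=5)
print(final, fires)
```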

Chan, Har-Peled, and Jones [SICOMP 2020] developed locality-sensitive orderings (LSO) for Euclidean space. A $(\tau,\rho)$-LSO is a collection $\Sigma$ of orderings such that for every $x,y\in\mathbb{R}^d$ there is an ordering $\sigma\in\Sigma$, where all the points between $x$ and $y$ w.r.t. $\sigma$ are in the $\rho$-neighborhood of either $x$ or $y$. In essence, LSOs allow one to reduce problems to the $1$-dimensional line. Later, Filtser and Le [STOC 2022] developed LSOs for doubling metrics, general metric spaces, and minor free graphs. For Euclidean and doubling spaces, the number of orderings in the LSO is exponential in the dimension, which made them mainly useful for the low dimensional regime. In this paper, we develop new LSOs for Euclidean, $\ell_p$, and doubling spaces that allow us to trade larger stretch for a much smaller number of orderings. We then use our new LSOs (as well as the previous ones) to construct path reporting low hop spanners, fault tolerant spanners, reliable spanners, and light spanners for different metric spaces. While many nearest neighbor search (NNS) data structures were constructed for metric spaces with implicit distance representations (where the distance between two metric points can be computed using their names, e.g. Euclidean space), for other spaces almost nothing is known. In this paper we initiate the study of the labeled NNS problem, where one is allowed to artificially assign labels (short names) to metric points. We use LSOs to construct efficient labeled NNS data structures in this model.
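
The flavor of the one-dimensional reduction can be sketched with randomly shifted Z-order (Morton) curves: sort the points along a few shifted orderings and compare each point only with its neighbors in each ordering. The shifts and the number of orderings below are illustrative and do not carry the formal $(\tau,\rho)$-LSO guarantees.

```python
# Sketch: near-neighbor candidates from a few shifted Z-order orderings.
import numpy as np

def morton_key(x, y, bits=16):
    key = 0
    for b in range(bits):                  # interleave the bits of x and y
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

rng = np.random.default_rng(2)
pts = rng.integers(0, 2**15, size=(500, 2))

best = {}                                  # point index -> closest found
for _ in range(4):                         # a few shifted orderings
    shift = rng.integers(0, 2**15, size=2)
    order = sorted(range(len(pts)),
                   key=lambda i: morton_key(*(pts[i] + shift)))
    for a, b in zip(order, order[1:]):     # only adjacent pairs are compared
        d = int(np.max(np.abs(pts[a] - pts[b])))   # L_inf distance
        for i in (a, b):
            if d < best.get(i, np.inf):
                best[i] = d

print("median neighbor distance found:", np.median(list(best.values())))
```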

We show that the problem of whether a query is equivalent to a query of tree-width $k$ is decidable, for the class of Unions of Conjunctive Regular Path Queries with two-way navigation (UC2RPQs). A previous result by Barcel\'o, Romero, and Vardi [SIAM Journal on Computing, 2016] has shown decidability for the case $k=1$, and here we extend this result showing that decidability in fact holds for any arbitrary $k\geq 1$. The algorithm is in 2ExpSpace, but for the restricted yet practically relevant case where all regular expressions of the query are of the form $a^*$ or $(a_1 + \dotsb + a_n)$ we show that the complexity of the problem drops to $\Pi^P_2$. We also investigate the related problem of approximating a UC2RPQ by queries of small tree-width. We exhibit an algorithm which, for any fixed number $k$, builds the maximal under-approximation of tree-width $k$ of a UC2RPQ. The maximal under-approximation of tree-width $k$ of a query $q$ is a query $q'$ of tree-width $k$ which is contained in $q$ in a maximal and unique way, that is, such that for every query $q''$ of tree-width $k$, if $q''$ is contained in $q$ then $q''$ is also contained in $q'$. Our approach is shown to be robust, in the sense that it also allows testing equivalence with queries of a given path-width, it covers the previously known result for $k=1$, and it allows testing whether a (one-way) UCRPQ is equivalent to a UCRPQ of a given tree-width (or path-width).
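
To make the tree-width notion concrete: the relevant graph is the underlying undirected graph of the query's atoms, whose tree-width can be bounded with standard heuristics. The three-atom query below is an assumed example, not one from the paper.

```python
# Sketch: tree-width of a query's underlying graph via a standard heuristic.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# Atoms x --a*--> y, y --b--> z, z --(a+b)--> x : a triangle-shaped query.
atoms = [("x", "y", "a*"), ("y", "z", "b"), ("z", "x", "a+b")]

G = nx.Graph()
G.add_edges_from((u, v) for u, v, _ in atoms)   # drop directions and labels
width, decomposition = treewidth_min_degree(G)
print("upper bound on tree-width:", width)      # a triangle has tree-width 2
print("bags:", list(decomposition.nodes))
```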

We consider the problem of variational Bayesian inference in a latent variable model where a (possibly complex) observed stochastic process is governed by the solution of a latent stochastic differential equation (SDE). Motivated by the challenges that arise when trying to learn an (almost arbitrary) latent neural SDE from large-scale data, such as efficient gradient computation, we take a step back and study a specific subclass instead. In our case, the SDE evolves on a homogeneous latent space and is induced by stochastic dynamics of the corresponding (matrix) Lie group. In learning problems, SDEs on the unit $n$-sphere are arguably the most relevant incarnation of this setup. Notably, for variational inference, the sphere not only facilitates using a truly uninformative prior SDE, but we also obtain a particularly simple and intuitive expression for the Kullback-Leibler divergence between the approximate posterior and prior process in the evidence lower bound. Experiments demonstrate that a latent SDE of the proposed type can be learned efficiently by means of an existing one-step geometric Euler-Maruyama scheme. Despite restricting ourselves to a less diverse class of SDEs, we achieve competitive or even state-of-the-art performance on various time series interpolation and classification benchmarks.
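
The one-step scheme is easy to illustrate on the sphere: take an Euler step with drift and noise in the tangent space at the current point, then map back to the manifold. The sketch below uses projection plus renormalization as the retraction and an assumed toy drift, rather than the paper's learned neural SDE or its group-action formulation.

```python
# Minimal geometric Euler-Maruyama-style update on the unit sphere S^2.
import numpy as np

rng = np.random.default_rng(3)

def tangent_project(x, v):
    return v - np.dot(v, x) * x            # remove the radial component

def em_step(x, drift, sigma, dt):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    v = tangent_project(x, drift(x) * dt + noise)
    y = x + v
    return y / np.linalg.norm(y)           # retract back onto the sphere

x = np.array([1.0, 0.0, 0.0])              # start on the sphere
path = [x]
for _ in range(1000):
    x = em_step(x, drift=lambda p: np.array([0.0, 1.0, 0.0]) - p,
                sigma=0.3, dt=1e-2)
    path.append(x)
print("still on sphere:", np.allclose([np.linalg.norm(p) for p in path], 1.0))
```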

The orthogonality dimension of a graph $G$ over $\mathbb{R}$ is the smallest integer $k$ for which one can assign a nonzero $k$-dimensional real vector to each vertex of $G$, such that every two adjacent vertices receive orthogonal vectors. We prove that for every sufficiently large integer $k$, it is $\mathsf{NP}$-hard to decide whether the orthogonality dimension of a given graph over $\mathbb{R}$ is at most $k$ or at least $2^{(1-o(1)) \cdot k/2}$. We further prove such hardness results for the orthogonality dimension over finite fields as well as for the closely related minrank parameter, which is motivated by the index coding problem in information theory. This in particular implies that it is $\mathsf{NP}$-hard to approximate these graph quantities to within any constant factor. Previously, the hardness of approximation was known to hold either assuming certain variants of the Unique Games Conjecture or for approximation factors smaller than $3/2$. The proofs involve the concept of line digraphs and bounds on their orthogonality dimension and on the minrank of their complement.
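
As a small worked example of the definition: the classical Lovász umbrella gives the 5-cycle $C_5$ an orthogonal representation in $\mathbb{R}^3$ (adjacent vertices receive orthogonal unit vectors), witnessing orthogonality dimension at most $3$. This illustrates the parameter only; it is unrelated to the paper's hardness construction.

```python
# The Lovasz umbrella: an orthogonal representation of C_5 in R^3.
import numpy as np

phi = np.arctan(np.cos(np.pi / 5) ** -0.5)   # handle angle of the umbrella
u = np.array([[np.cos(phi),
               np.sin(phi) * np.cos(4 * np.pi * i / 5),
               np.sin(phi) * np.sin(4 * np.pi * i / 5)] for i in range(5)])

for i in range(5):                           # adjacent pairs in the 5-cycle
    assert abs(u[i] @ u[(i + 1) % 5]) < 1e-9
print("C_5 has an orthogonal representation in R^3")
```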

We consider the following general model of a sorting procedure: we fix a hereditary permutation class $\mathcal{C}$, which corresponds to the operations that the procedure is allowed to perform in a single step. The input of sorting is a permutation $\pi$ of the set $[n]=\{1,2,\dotsc,n\}$, i.e., a sequence where each element of $[n]$ appears once. In every step, the sorting procedure picks a permutation $\sigma$ of length $n$ from $\mathcal{C}$, and rearranges the current permutation of numbers by composing it with $\sigma$. The goal is to transform the input $\pi$ into the sorted sequence $1,2,\dotsc,n$ in as few steps as possible. This model of sorting captures not only classical sorting algorithms, like insertion sort or bubble sort, but also sorting by series of devices, like stacks or parallel queues, as well as sorting by block operations commonly considered, e.g., in the context of genome rearrangement. Our goal is to describe the possible asymptotic behavior of the worst-case number of steps needed when sorting with a hereditary permutation class. As the main result, we show that any hereditary permutation class $\mathcal{C}$ falls into one of five distinct categories. Disregarding the classes that cannot sort all permutations, the number of steps needed to sort any permutation of $[n]$ with $\mathcal{C}$ is either $\Theta(n^2)$, a function between $O(n)$ and $\Omega(\sqrt{n})$, a function between $O(\log^2 n)$ and $\Omega(\log n)$, or $1$, and for each of these cases we provide a structural characterization of the corresponding hereditary classes.
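
The step-as-composition model can be demonstrated with one concrete family of operations, block reversals (a staple of genome rearrangement): each step composes the current sequence with a permutation that reverses one contiguous block. The greedy strategy below sorts any input in at most $n-1$ such steps; it illustrates the model, not the paper's classification.

```python
# Sorting by composing with block-reversal permutations.
def reversal_sigma(n, i, j):
    """Permutation of [0..n-1] that reverses the block [i, j]."""
    sigma = list(range(n))
    sigma[i:j + 1] = reversed(sigma[i:j + 1])
    return sigma

def compose(pi, sigma):
    return [pi[s] for s in sigma]          # one sorting step: pi -> pi . sigma

def sort_by_reversals(pi):
    pi, n, steps = list(pi), len(pi), 0
    for v in range(1, n + 1):              # put v into position v-1
        pos = pi.index(v)
        if pos != v - 1:
            pi = compose(pi, reversal_sigma(n, v - 1, pos))
            steps += 1
    return pi, steps

print(sort_by_reversals([3, 1, 4, 5, 2]))  # -> ([1, 2, 3, 4, 5], 3)
```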

We combine dependent types with linear type systems that soundly and completely capture polynomial time computation. We explore two systems for capturing polynomial time: one system that disallows construction of iterable data, and one, based on the LFPL system of Martin Hofmann, that controls construction via a payment method. Both of these are extended to full dependent types via Quantitative Type Theory, allowing for arbitrary computation in types alongside guaranteed polynomial time computation in terms. We prove the soundness of the systems using a realisability technique due to Dal Lago and Hofmann. Our long-term goal is to combine the extensional reasoning of type theory with intensional reasoning about the resources intrinsically consumed by programs. This paper is a step along this path, which we hope will lead both to practical systems for reasoning about programs' resource usage, and to theoretical use as a form of synthetic computational complexity theory.
