
The \emph{Fast Gaussian Transform} (FGT) enables subquadratic-time multiplication of an $n\times n$ Gaussian kernel matrix $\mathsf{K}_{i,j}= \exp ( - \| x_i - x_j \|_2^2 ) $ with an arbitrary vector $h \in \mathbb{R}^n$, where $x_1,\dots, x_n \in \mathbb{R}^d$ are a set of \emph{fixed} source points. This kernel plays a central role in machine learning and random feature maps. Nevertheless, in most modern data analysis applications, datasets are dynamically changing (yet often have low rank), and recomputing the FGT from scratch in (kernel-based) algorithms incurs a major computational overhead ($\gtrsim n$ time for a single source update in $\mathbb{R}^d$). These applications motivate a \emph{dynamic FGT} algorithm, which maintains a dynamic set of sources under \emph{kernel-density estimation} (KDE) queries in \emph{sublinear time} while retaining Mat-Vec multiplication accuracy and speed. Assuming the dynamic data-points $x_i$ lie in a (possibly changing) $k$-dimensional subspace ($k\leq d$), our main result is an efficient dynamic FGT algorithm, supporting the following operations in $\log^{O(k)}(n/\varepsilon)$ time: (1) adding or deleting a source point, and (2) estimating the ``kernel-density'' of a query point with respect to the sources with $\varepsilon$ additive accuracy. The core of the algorithm is a dynamic data structure for maintaining the \emph{projected} ``interaction rank'' between source and target boxes, decoupled into finite truncation of Taylor and Hermite expansions.
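
As a concrete (and heavily simplified) illustration of the expansion machinery behind the FGT, the following sketch approximates a one-dimensional Gaussian kernel sum by a truncated Hermite expansion about a single box center, using the classical Greengard-Strain identity $e^{-(y-x_j)^2} = \sum_{m\ge 0} \frac{(x_j-c)^m}{m!} h_m(y-c)$ with $h_m(t) = e^{-t^2}H_m(t)$. This is only the static, one-dimensional building block, not the paper's dynamic data structure or its projected interaction-rank maintenance; all function names are illustrative:

import math
import numpy as np

def hermite_functions(t, p):
    # h_m(t) = exp(-t^2) * H_m(t) for m = 0..p-1, H_m the physicists' Hermite polynomials
    h = np.empty((p, len(t)))
    h[0] = np.exp(-t ** 2)
    H_prev, H_curr = np.ones_like(t), 2.0 * t
    if p > 1:
        h[1] = h[0] * H_curr
    for m in range(2, p):
        H_next = 2.0 * t * H_curr - 2.0 * (m - 1) * H_prev
        h[m] = h[0] * H_next
        H_prev, H_curr = H_curr, H_next
    return h

def gauss_sum_hermite(sources, weights, targets, center, p=16):
    # Truncated Hermite expansion about `center`:
    #   sum_j w_j exp(-(y - x_j)^2)  ~=  sum_{m<p} A_m h_m(y - center),
    #   with moments A_m = sum_j w_j (x_j - center)^m / m!
    s = sources - center
    A = np.array([np.sum(weights * s ** m) / math.factorial(m) for m in range(p)])
    return A @ hermite_functions(targets - center, p)

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, 1000)   # sources clustered around the expansion center
w = rng.uniform(size=1000)
y = rng.uniform(-0.5, 0.5, 5)      # query (target) points
exact = np.array([np.sum(w * np.exp(-(t - x) ** 2)) for t in y])
print(np.max(np.abs(exact - gauss_sum_hermite(x, w, y, center=0.0))))  # truncation error, decays rapidly with p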

Related content


We define and investigate the Fr\'{e}chet edit distance problem. Given two polygonal curves $\pi$ and $\sigma$ and a threshold value $\delta>0$, we seek the minimum number of edits to $\sigma$ such that the Fr\'{e}chet distance between the edited $\sigma$ and $\pi$ is at most $\delta$. For the edit operations we consider three cases, namely, deletion of vertices, insertion of vertices, or both. For this basic problem we consider a number of variants. Specifically, we provide polynomial time algorithms for both discrete and continuous Fr\'{e}chet edit distance variants, as well as hardness results for weak Fr\'{e}chet edit distance variants.
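
For context, the quantity against which the edits are measured in the discrete variants is the discrete Fr\'{e}chet distance, computable by a standard $O(|\pi||\sigma|)$ dynamic program. The sketch below is that textbook routine (not the paper's edit-distance algorithm); the decision version $\mathrm{dF}(\pi,\sigma') \le \delta$ is the basic test one would apply to an edited curve $\sigma'$:

import numpy as np

def discrete_frechet(P, Q):
    # Standard O(|P||Q|) dynamic program for the discrete Frechet distance
    # between curves given as arrays of vertices (one vertex per row).
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise vertex distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i, j - 1], ca[i - 1, j - 1]), d[i, j])
    return ca[-1, -1]

# decision version used on an edited curve:  discrete_frechet(pi, sigma_edited) <= delta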

We study the $k$-th nearest neighbor distance function from a finite point-set in $\mathbb{R}^d$. We provide a Morse theoretic framework to analyze the sub-level set topology. In particular, we present a simple combinatorial-geometric characterization for critical points and their indices, along with detailed information about the possible changes in homology at the critical levels. We conclude by computing the expected number of critical points for a homogeneous Poisson process. Our results deliver significant insights and tools for the analysis of persistent homology in order-$k$ Delaunay mosaics, and random $k$-fold coverage.
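
For intuition, the function in question can be evaluated directly: $d_k(x)$ is the distance from $x$ to its $k$-th closest point of the set, and the sub-level set $\{d_k \le r\}$ is exactly the region covered by at least $k$ of the radius-$r$ balls around the points (the $k$-fold cover). A minimal brute-force sketch, illustrative only and not the Morse-theoretic machinery of the paper:

import numpy as np

def kth_nn_distance(points, queries, k):
    # d_k(x): Euclidean distance from each query x to its k-th nearest point of `points`.
    # {x : d_k(x) <= r} is the region covered by at least k balls of radius r around the points.
    diff = queries[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    return np.sort(dists, axis=1)[:, k - 1]

rng = np.random.default_rng(1)
pts = rng.random((200, 2))     # finite point set in R^2
q = rng.random((5, 2))         # a few query locations
print(kth_nn_distance(pts, q, k=3))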

For an $n$-vertex directed graph $G = (V,E)$, a $\beta$-\emph{shortcut set} $H$ is a set of additional edges $H \subseteq V \times V$ such that $G \cup H$ has the same transitive closure as $G$, and for every pair $u,v \in V$, there is a $uv$-path in $G \cup H$ with at most $\beta$ edges. A natural generalization of shortcut sets to distances is a $(\beta,\epsilon)$-\emph{hopset} $H \subseteq V \times V$, where the requirement is that $G$ and $G \cup H$ have the same shortest-path distances, and for every $u,v \in V$, there is a $(1+\epsilon)$-approximate shortest path in $G \cup H$ with at most $\beta$ edges. There is a large literature on the tradeoff between the size of a shortcut set / hopset and the value of $\beta$. We highlight the most natural point on this tradeoff: what is the minimum value of $\beta$, such that for any graph $G$, there exists a $\beta$-shortcut set (or a $(\beta,\epsilon)$-hopset) with $O(n)$ edges? Not only is this a natural structural question in its own right, but shortcut sets / hopsets form the core of many distributed, parallel, and dynamic algorithms for reachability / shortest paths. Until very recently the best known upper bound was a folklore construction showing $\beta = O(n^{1/2})$, but in a breakthrough result Kogan and Parter [SODA 2022] improved this to $\beta = \tilde{O}(n^{1/3})$ for shortcut sets and $\tilde{O}(n^{2/5})$ for hopsets. Our main result closes the gap between shortcut sets and hopsets. That is, we show that for any graph $G$ and any fixed $\epsilon$ there is a $(\tilde{O}(n^{1/3}),\epsilon)$-hopset with $O(n)$ edges. More generally, we achieve a smooth tradeoff between hopset size and $\beta$ which exactly matches the tradeoff of Kogan and Parter for shortcut sets (up to polylog factors). Using a very recent black-box reduction of Kogan and Parter, our new hopset implies improved bounds for approximate distance preservers.
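
To make the hopset guarantee concrete, the sketch below computes $\beta$-hop-bounded distances $d^{(\beta)}(u,v)$ with a hop-limited Bellman-Ford recursion; $H$ is a $(\beta,\epsilon)$-hopset precisely when $d^{(\beta)}_{G\cup H}(u,v) \le (1+\epsilon)\, d_G(u,v)$ for every pair. This is only a checker for the definition, not the construction of Kogan and Parter or of this paper:

import numpy as np

def hop_limited_distances(n, edges, beta):
    # d^(beta)(u, v): shortest-path distance using at most `beta` edges,
    # for a directed graph on vertices 0..n-1 given as a list of (u, v, w) triples.
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)
    for _ in range(beta):                          # each round allows one more hop
        prev = d.copy()
        for (u, v, w) in edges:
            d[:, v] = np.minimum(d[:, v], prev[:, u] + w)
        if np.array_equal(prev, d):
            break
    return d

# Verification of the hopset property, e.g.:
#   compare hop_limited_distances(n, edges_G + edges_H, beta)
#   entrywise against the exact distances of G, allowing a (1 + eps) factor.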

If $G$ is a group, we say a subset $S$ of $G$ is product-free if the equation $xy=z$ has no solutions with $x,y,z \in S$. For $D \in \mathbb{N}$, a group $G$ is said to be $D$-quasirandom if the minimal dimension of a nontrivial complex irreducible representation of $G$ is at least $D$. Gowers showed that in a $D$-quasirandom finite group $G$, the maximal size of a product-free set is at most $|G|/D^{1/3}$. This disproved a longstanding conjecture of Babai and S\'os from 1985. For the special unitary group, $G=SU(n)$, Gowers observed that his argument yields an upper bound of $n^{-1/3}$ on the measure of a measurable product-free subset. In this paper, we improve Gowers' upper bound to $\exp(-cn^{1/3})$, where $c>0$ is an absolute constant. In fact, we establish something stronger, namely, product-mixing for measurable subsets of $SU(n)$ with measure at least $\exp(-cn^{1/3})$; for this product-mixing result, the $n^{1/3}$ in the exponent is sharp. Our approach involves introducing novel hypercontractive inequalities, which imply that the non-Abelian Fourier spectrum of the indicator function of a small set concentrates on high-dimensional irreducible representations. Our hypercontractive inequalities are obtained via methods from representation theory, harmonic analysis, random matrix theory and differential geometry. We generalize our hypercontractive inequalities from $SU(n)$ to an arbitrary $D$-quasirandom compact connected Lie group for $D$ at least an absolute constant, thereby extending our results on product-free sets to such groups. We also demonstrate various other applications of our inequalities to geometry (viz., non-Abelian Brunn-Minkowski type inequalities), mixing times, and the theory of growth in compact Lie groups.
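
For quick reference, the two bounds discussed above for a measurable product-free set $A \subseteq SU(n)$, restated side by side from the abstract: $\mu(A) \le n^{-1/3}$ (Gowers) versus $\mu(A) \le \exp(-c\,n^{1/3})$ for an absolute constant $c>0$ (this paper), with the exponent $n^{1/3}$ being sharp for the accompanying product-mixing statement.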

Given a set of $n$ sites from $\mathbb{R}^d$, each having some positive weight factor, the Multiplicatively Weighted Voronoi Diagram is a subdivision of space that associates each cell to the site whose weighted Euclidean distance is minimal for all points in the cell. We give novel approximation algorithms that output a cube-based subdivision such that the weighted distance of a point with respect to the associated site is at most $(1+\varepsilon)$ times the minimum weighted distance, for any fixed parameter $\varepsilon \in (0,1)$. The diagram size is $O_d(n \log(1/\varepsilon)/\varepsilon^{d-1})$ and the construction time is within an $O_d(\log(n)/\varepsilon^{(d+5)/2})$-factor of the size bound. We also prove a matching lower bound for the size, showing that the proposed method is the first to achieve \emph{optimal size}, up to $\Theta(1)^d$-factors. In particular, the obscure $\log(1/\varepsilon)$ factor is unavoidable. As a by-product, we obtain a factor $d^{O(d)}$ improvement in size for the unweighted case and $O(d \log(n) + d^2 \log(1/\varepsilon))$ point-location time in the subdivision, improving the known query bound by one $d$-factor. The key ingredients of our approximation algorithms are the study of convex regions that we call cores, an adaptive refinement algorithm to obtain optimal size, and a novel notion of \emph{bisector coresets}, which may be of independent interest. In particular, we show that coresets with $O_d(1/\varepsilon^{(d+3)/2})$ worst-case size can be computed in near-linear time.
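
As a small sanity-check sketch of what the $(1+\varepsilon)$ guarantee means (assuming the common convention that the multiplicatively weighted distance from $x$ to a site $s_i$ with weight $w_i$ is $\|x-s_i\|/w_i$; the paper's convention may differ), the following brute-force code assigns query points, e.g. cube centers, to sites and verifies the approximation condition. It is illustrative only and contains none of the cores, adaptive refinement, or bisector coresets of the actual construction:

import numpy as np

def mwv_assign(queries, sites, weights):
    # Exact brute-force assignment for the multiplicatively weighted Voronoi diagram,
    # assuming weighted distance d_i(x) = ||x - s_i|| / w_i.  Returns one site index per query.
    d = np.linalg.norm(queries[:, None, :] - sites[None, :, :], axis=2) / weights[None, :]
    return np.argmin(d, axis=1)

def is_eps_approximate(queries, sites, weights, assigned, eps):
    # Guarantee of an approximate cube-based subdivision: the weighted distance to the
    # associated site is at most (1 + eps) times the minimum weighted distance.
    d = np.linalg.norm(queries[:, None, :] - sites[None, :, :], axis=2) / weights[None, :]
    best = d.min(axis=1)
    chosen = d[np.arange(len(queries)), assigned]
    return bool(np.all(chosen <= (1.0 + eps) * best))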

We study the sample complexity of learning an $\epsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. For weakly communicating MDPs, we establish the complexity bound $\tilde{O}(SA\frac{H}{\epsilon^2})$, where $H$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S,A,H$ and $\epsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. We further investigate sample complexity in general (non-weakly-communicating) average-reward MDPs. We argue a new transient time parameter $B$ is necessary, establish an $\tilde{O}(SA\frac{B+H}{\epsilon^2})$ complexity bound, and prove a matching (up to log factors) minimax lower bound. Both results are based on reducing the average-reward MDP to a discounted MDP, which requires new ideas in the general setting. To establish the optimality of this reduction, we develop improved bounds for $\gamma$-discounted MDPs, showing that $\tilde{O}\left(SA\frac{H}{(1-\gamma)^2\epsilon^2}\right)$ samples suffice to learn an $\epsilon$-optimal policy in weakly communicating MDPs under the regime that $\gamma\geq 1-1/H$, and $\tilde{O}\left(SA\frac{B+H}{(1-\gamma)^2\epsilon^2}\right)$ samples suffice in general MDPs when $\gamma\geq 1-\frac{1}{B+H}$. Both these results circumvent the well-known lower bound of $\tilde{\Omega}\left(SA\frac{1}{(1-\gamma)^3\epsilon^2}\right)$ for arbitrary $\gamma$-discounted MDPs. Our analysis develops upper bounds on certain instance-dependent variance parameters in terms of the span and transient time parameters. The weakly communicating bounds are tighter than those based on the mixing time or diameter of the MDP and may be of broader use.
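
For orientation only: a generic model-based (plug-in) procedure under a generative model draws $N$ samples per state-action pair, forms the empirical transition kernel, and solves the resulting $\gamma$-discounted MDP; the reduction discussed above would then choose $\gamma \approx 1 - 1/H$. The sketch below is this generic recipe, not the paper's algorithm or analysis, and its interface (a sample_next_state(s, a) callable returning a state index, rewards given as an $S \times A$ array) is an assumption made for illustration:

import numpy as np

def plugin_discounted_policy(sample_next_state, rewards, S, A, N, gamma, iters=2000):
    # Draw N next-state samples per (s, a) from the generative model, build the
    # empirical transition kernel, and run value iteration on the gamma-discounted MDP.
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(N):
                P_hat[s, a, sample_next_state(s, a)] += 1.0 / N
    V = np.zeros(S)
    for _ in range(iters):                   # value iteration on the empirical model
        Q = rewards + gamma * (P_hat @ V)    # shape (S, A)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)                  # greedy policy w.r.t. the estimated Q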

The time-dependent Gross-Pitaevskii equation (GPE) is a nonlinear Schr\"odinger equation which is used in quantum physics to model the dynamics of Bose-Einstein condensates. In this work we consider numerical approximations of the GPE based on a multiscale approach known as the localized orthogonal decomposition. Combined with an energy-preserving time integrator, one derives a method which is of high order in space and time under mild regularity assumptions. In previous work, the method has been shown to be numerically very efficient compared to first-order Lagrange FEM. In this paper, we further investigate the performance of the method and compare it with higher-order Lagrange FEM. For rough problems we observe that the novel method performs very efficiently and retains its high order, while the classical methods can only compete well for smooth problems.
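
For readers who want a concrete baseline for the model problem (not the localized orthogonal decomposition method of the paper), a standard reference scheme for the 1D GPE $i\,\partial_t u = -\partial_{xx} u + V u + \beta |u|^2 u$ on a periodic domain is Strang splitting with an exact Fourier step for the linear part; constants and scalings of the GPE vary across the literature, so this normalization is an assumption:

import numpy as np

def gpe_strang_step(u, dt, k2, V, beta):
    # One Strang-splitting step for  i u_t = -u_xx + V u + beta |u|^2 u  on a periodic grid.
    # k2 = squared Fourier wavenumbers, V = trapping potential sampled on the grid.
    u = u * np.exp(-0.5j * dt * (V + beta * np.abs(u) ** 2))   # half nonlinear/potential step (exact)
    u = np.fft.ifft(np.exp(-1j * dt * k2) * np.fft.fft(u))     # full linear step, exact in Fourier space
    u = u * np.exp(-0.5j * dt * (V + beta * np.abs(u) ** 2))   # half nonlinear/potential step (exact)
    return u

L, n = 8.0, 256
x = np.linspace(-L, L, n, endpoint=False)                      # periodic grid on [-L, L)
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * L / n)                 # angular wavenumbers
u = np.exp(-x ** 2).astype(complex)                            # initial condensate profile
for _ in range(1000):
    u = gpe_strang_step(u, dt=1e-3, k2=k ** 2, V=0.5 * x ** 2, beta=1.0)
print(np.sum(np.abs(u) ** 2) * (2 * L / n))                    # discrete mass, conserved by the scheme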

Given a simple undirected graph $G$, a quasi-clique is a subgraph of $G$ whose density is at least $\gamma$ $(0 < \gamma \leq 1)$. Finding a maximum quasi-clique has been addressed from two different perspectives: $i)$ maximizing vertex cardinality for a given edge density; and $ii)$ maximizing edge density for a given vertex cardinality. However, when no a priori preference information about cardinality and density is available, a more natural approach is to consider the problem from a multiobjective perspective. We introduce the Multiobjective Quasi-clique Problem (MOQC), which aims to find a quasi-clique by simultaneously maximizing both vertex cardinality and edge density. To efficiently address this problem, we explore the relationship among MOQC, its single-objective counterpart problems, and a biobjective optimization problem, along with several properties of the MOQC problem and quasi-cliques. We propose a baseline approach using $\varepsilon$-constraint scalarization and introduce a Two-phase strategy, which applies a dichotomic search based on weighted sum scalarization in the first phase and an $\varepsilon$-constraint methodology in the second phase. Additionally, we present a Three-phase strategy that combines the dichotomic search used in Two-phase with a vertex-degree-based local search employing novel sufficient conditions to assess quasi-clique efficiency, followed by an $\varepsilon$-constraint in a final stage. Experimental results on real-world sparse graphs indicate that the integrated use of dichotomic search and local search, together with mechanisms to assess quasi-clique efficiency, makes the Three-phase strategy an effective approach for solving the MOQC problem in terms of running time and ability to produce new efficient quasi-cliques.
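
The definitions involved are easy to make executable: the sketch below computes the edge density of an induced subgraph and the resulting $\gamma$-quasi-clique test, and notes the objective point $(|S|, \mathrm{density}(S))$ used in the multiobjective view. It is a definitional helper only, not any of the Two-phase or Three-phase strategies:

def density(edges, S):
    # Edge density of the subgraph induced by vertex set S:
    # |E(S)| / C(|S|, 2), taken to be 1.0 when |S| < 2.
    S = set(S)
    if len(S) < 2:
        return 1.0
    e = sum(1 for (u, v) in edges if u in S and v in S)
    return e / (len(S) * (len(S) - 1) / 2)

def is_quasi_clique(edges, S, gamma):
    # S induces a gamma-quasi-clique iff its edge density is at least gamma.
    return density(edges, S) >= gamma

# In the multiobjective view, each vertex subset S maps to the objective point
# (|S|, density(S)); MOQC asks for the Pareto-efficient such points.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(density(edges, {0, 1, 2, 3}), is_quasi_clique(edges, {0, 1, 2, 3}, gamma=0.6))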

An \emph{eight-partition} of a finite set of points (respectively, of a continuous mass distribution) in $\mathbb{R}^3$ consists of three planes that divide the space into $8$ octants, such that each open octant contains at most $1/8$ of the points (respectively, of the mass). In 1966, Hadwiger showed that any mass distribution in $\mathbb{R}^3$ admits an eight-partition; moreover, one can prescribe the normal direction of one of the three planes. The analogous result for finite point sets follows by a standard limit argument. We prove the following variant of this result: Any mass distribution (or point set) in $\mathbb{R}^3$ admits an eight-partition for which the intersection of two of the planes is a line with a prescribed direction. Moreover, we present an efficient algorithm for calculating an eight-partition of a set of $n$ points in~$\mathbb{R}^3$ (with prescribed normal direction of one of the planes) in time $O^{*}(n^{5/2})$.
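
The defining property is straightforward to verify for a candidate triple of planes: count the points in each of the eight open octants and check that none exceeds $n/8$. The sketch below does exactly this (the representation of a plane as a (normal, offset) pair is an illustrative choice); it only checks a given triple and does not construct one, which is what the paper's $O^{*}(n^{5/2})$ algorithm is for:

import numpy as np
from itertools import product

def is_eight_partition(points, planes):
    # planes: three (normal, offset) pairs describing {x in R^3 : <normal, x> = offset}.
    # Each open octant (a choice of strict side of every plane) may contain at most n/8 points.
    pts = np.asarray(points, float)
    n = len(pts)
    side = np.stack([pts @ np.asarray(a, float) - b for (a, b) in planes], axis=1)  # (n, 3) signed values
    for signs in product((1.0, -1.0), repeat=3):
        in_octant = np.all(side * np.asarray(signs) > 0, axis=1)
        if in_octant.sum() > n / 8:
            return False
    return True

# Example: coordinate planes through the coordinate-wise medians halve the set along each
# axis but need not balance all eight octants, so the check may or may not succeed.
P = np.random.default_rng(2).standard_normal((800, 3))
planes = [((1, 0, 0), np.median(P[:, 0])),
          ((0, 1, 0), np.median(P[:, 1])),
          ((0, 0, 1), np.median(P[:, 2]))]
print(is_eight_partition(P, planes))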

In stochastic simulation, input uncertainty refers to the propagation of the statistical noise incurred in calibrating input models into the output accuracy, in addition to the Monte Carlo simulation noise. The vast majority of the input uncertainty literature focuses on estimating target output quantities that are real-valued. However, outputs of simulation models are random, and real-valued targets essentially serve only as summary statistics. To provide a more holistic assessment, we study the input uncertainty problem from a distributional view, namely we construct confidence bands for the entire output distribution function. Our approach utilizes a novel test statistic whose asymptotic distribution is the supremum of the sum of a Brownian bridge and a suitable mean-zero Gaussian process; this generalizes the Kolmogorov-Smirnov statistic to account for input uncertainty. Regarding implementation, we also demonstrate how to use subsampling to efficiently estimate the covariance function of the Gaussian process, thereby leading to an implementable estimate of the quantile of the test statistic and a statistically valid confidence band. Numerical results demonstrate how our new confidence bands provide valid coverage for output distributions under input uncertainty, which is not achievable by conventional approaches.
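
As a point of comparison, the classical band that this construction generalizes covers Monte Carlo noise only: with $n$ i.i.d. simulation outputs and exactly known input models, a Kolmogorov-Smirnov/DKW band of half-width $\sqrt{\ln(2/\alpha)/(2n)}$ around the empirical CDF is valid. The sketch below computes that baseline band; under input uncertainty the half-width must instead come from the quantile of the supremum of the Brownian bridge plus Gaussian process limit, estimated via subsampling as described above:

import numpy as np

def ks_band(outputs, alpha=0.05):
    # Classical (1 - alpha) DKW/KS-type confidence band for the output CDF from i.i.d.
    # simulation outputs; it accounts for Monte Carlo noise only, not input uncertainty.
    y = np.sort(np.asarray(outputs, float))
    n = len(y)
    F_hat = np.arange(1, n + 1) / n                     # empirical CDF at the sorted outputs
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))      # Dvoretzky-Kiefer-Wolfowitz half-width
    return y, np.clip(F_hat - eps, 0.0, 1.0), np.clip(F_hat + eps, 0.0, 1.0)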
