
We present an $O(1)$-approximate fully dynamic algorithm for the $k$-median and $k$-means problems on metric spaces with amortized update time $\tilde O(k)$ and worst-case query time $\tilde O(k^2)$. We complement our theoretical analysis with the first in-depth experimental study for the dynamic $k$-median problem on general metrics, focusing on comparing our dynamic algorithm to the current state-of-the-art by Henzinger and Kale [ESA'20]. Finally, we also provide a lower bound for dynamic $k$-median which shows that any $O(1)$-approximate algorithm with $\tilde O(\text{poly}(k))$ query time must have $\tilde \Omega(k)$ amortized update time, even in the incremental setting.

Related Content

Formalized $1$-category theory forms a core component of various libraries of mathematical proofs. However, more sophisticated results in fields from algebraic topology to theoretical physics, where objects have "higher structure," rely on infinite-dimensional categories in place of $1$-dimensional categories, and $\infty$-category theory has thus far resisted computer formalization. Using a new proof assistant called Rzk, which is designed to support Riehl and Shulman's simplicial extension of homotopy type theory for synthetic $\infty$-category theory, we provide the first formalizations of results from $\infty$-category theory. In particular, we formalize the Yoneda lemma, often regarded as the fundamental theorem of category theory, which roughly states that an object of a given category is determined by its relationship to all of the other objects of the category. A key feature of our framework is that, thanks to the synthetic theory, many constructions are automatically natural or functorial. We plan to use Rzk to formalize further results from $\infty$-category theory, such as the theory of limits, colimits, and adjunctions.
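For orientation, the classical $1$-categorical Yoneda lemma, of which the formalized result is the synthetic $\infty$-categorical analogue, can be stated as follows:

```latex
% Yoneda lemma, 1-categorical version: for a locally small category C,
% an object A of C, and a functor F : C -> Set, evaluation at the
% identity morphism gives a bijection
\[
  \mathrm{Nat}\big(\mathrm{Hom}_{\mathcal{C}}(A,-),\, F\big) \;\cong\; F(A),
\]
% natural in both A and F. Taking F = Hom_C(B,-) shows that a natural
% isomorphism Hom_C(A,-) = Hom_C(B,-) forces A and B to be isomorphic,
% i.e. an object is determined by its relationship to all other objects.
```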

We study the accuracy of reconstruction of a family of functions $f_\epsilon(x)$, $x\in\mathbb R^2$, $\epsilon\to0$, from their discrete Radon transform data sampled with step size $O(\epsilon)$. For each $\epsilon>0$ sufficiently small, the function $f_\epsilon$ has a jump across a rough boundary $\mathcal S_\epsilon$, which is modeled by an $O(\epsilon)$-size perturbation of a smooth boundary $\mathcal S$. The function $H_0$, which describes the perturbation, is assumed to be of bounded variation. Let $f_\epsilon^{\text{rec}}$ denote the reconstruction, which is computed by interpolating discrete data and substituting it into a continuous inversion formula. We prove that $(f_\epsilon^{\text{rec}}-K_\epsilon*f_\epsilon)(x_0+\epsilon\check x)=O(\epsilon^{1/2}\ln(1/\epsilon))$, where $x_0\in\mathcal S$ and $K_\epsilon$ is an easily computable kernel.

For a prime $p$ and a positive integer $m$, let $\mathbb{F}_{p^m}$ be the finite field of characteristic $p$, and let $\mathfrak{R}_l:=\mathbb{F}_{p^m}[v]/\langle v^l-v\rangle$ be a non-chain ring. In this paper, we study $(\sigma,\delta)$-cyclic codes over $\mathfrak{R}_l$ and their application to the construction of DNA codes. To this end, we first define a Gray map that produces classical codes over $\mathbb{F}_{p^m}$ from codes over the ring $\mathfrak{R}_l$. We then derive conditions under which a $(\sigma,\delta)$-cyclic code is reversible and a DNA code. Finally, this algebraic method yields many classical and DNA codes with better parameters.
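As a small illustration of the reversibility condition mentioned above (a toy check only; the paper derives such codes algebraically via $(\sigma,\delta)$-cyclic codes over $\mathfrak{R}_l$, and the example codes below are hypothetical):

```python
# A code C over the DNA alphabet {A, C, G, T} is reversible if the
# reverse of every codeword is again a codeword.

def is_reversible(code):
    """Return True iff the code is closed under word reversal."""
    return all(word[::-1] in code for word in code)

# A small reversible code: every word's reverse is also present.
reversible = {"ACGT", "TGCA", "AATT", "TTAA"}
# Removing one half of a reverse pair breaks the property.
not_reversible = {"ACGT", "AATT", "TTAA"}

print(is_reversible(reversible))      # True
print(is_reversible(not_reversible))  # False
```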

Shape is a powerful tool to understand point sets. A formal notion of shape is given by $\alpha$-shapes, which generalize the convex hull and provide an adjustable level of detail. Many real-world point sets have an inherent temporal property, as natural processes often happen over time, like lightning strikes during thunderstorms or moving animal swarms. To explore such point sets, where each point is associated with one timestamp, interactive applications may utilize $\alpha$-shapes and allow the user to specify different time windows and $\alpha$-values. We show how to compute the temporal $\alpha$-shape $\alpha_T$, a minimal description of all $\alpha$-shapes over all time windows, in output-sensitive linear time. We also give complexity bounds on $|\alpha_T|$. We use $\alpha_T$ to interactively visualize $\alpha$-shapes of user-specified time windows without having to constantly compute requested $\alpha$-shapes. Experimental results suggest that our approach outperforms an existing approach by a factor of at least $\sim$52 and that the description we compute has reasonable size in practice. The basis for our algorithm is an existing algorithm which computes all Delaunay triangles over all time windows using $\mathcal{O}(1)$ time per triangle. Our approach generalizes to higher dimensions with the same runtime for fixed $d$.
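A hedged sketch of the membership test that underlies $\alpha$-shapes (conventions differ; some texts parameterize by $1/\alpha$ instead): under one common convention, a Delaunay triangle belongs to the $\alpha$-complex when its circumradius is at most $\alpha$.

```python
import math

def circumradius(p, q, r):
    """Circumradius of triangle pqr in the plane."""
    a = math.dist(q, r)
    b = math.dist(p, r)
    c = math.dist(p, q)
    # Twice the unsigned area via the cross product.
    area2 = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    return a * b * c / (2 * area2)

def in_alpha_complex(tri, alpha):
    # Membership test for one convention of the alpha-complex.
    return circumradius(*tri) <= alpha

# Right triangle with legs 3 and 4: circumradius = hypotenuse / 2 = 2.5.
tri = ((0, 0), (3, 0), (0, 4))
print(circumradius(*tri))          # 2.5
print(in_alpha_complex(tri, 3.0))  # True
print(in_alpha_complex(tri, 2.0))  # False
```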

In this paper, we study the algebraic structure of $(\sigma,\delta)$-polycyclic codes as submodules of the quotient module $S/Sf$, where $S=R[x,\sigma,\delta]$ is the Ore extension, $f\in S$, and $R$ is a finite but not necessarily commutative ring. We establish that the Euclidean duals of $(\sigma,\delta)$-polycyclic codes are $(\sigma,\delta)$-sequential codes. Using $(\sigma,\delta)$-pseudo-linear transformations (PLTs), we define the annihilator dual of $(\sigma,\delta)$-polycyclic codes and demonstrate that the annihilator duals of $(\sigma,\delta)$-polycyclic codes maintain their $(\sigma,\delta)$-polycyclic nature. Furthermore, we classify when two $(\sigma,\delta)$-polycyclic codes are Hamming-isometrically equivalent. By employing Wedderburn polynomials, we introduce simple-root $(\sigma,\delta)$-polycyclic codes. Subsequently, we define the $(\sigma, \delta)$-Mattson-Solomon transform for this class of codes and address the problem of decomposing these codes using the properties of Wedderburn polynomials.

A 2018 conjecture of Brewster, McGuinness, Moore, and Noel asserts that for $k \ge 3$, if a graph has chromatic number greater than $k$, then it contains at least as many cycles of length $0 \bmod k$ as the complete graph on $k+1$ vertices. Our main result confirms this in the $k=3$ case by showing that every $4$-critical graph contains at least $4$ cycles of length $0 \bmod 3$, and that $K_4$ is the unique such graph achieving the minimum. We make progress on the general conjecture as well, showing that $(k+1)$-critical graphs with minimum degree $k$ have at least as many cycles of length $0\bmod r$ as $K_{k+1}$, provided $k+1 \ne 0 \bmod r$. We also show that $K_{k+1}$ uniquely minimizes the number of cycles of length $1\bmod k$ among all $(k+1)$-critical graphs, strengthening a recent result of Moore and West and extending it to the $k=3$ case.
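The extremal value on $K_4$ can be checked directly by brute force: $K_4$ contains seven simple cycles (four triangles and three $4$-cycles), of which exactly the four triangles have length $0 \bmod 3$. A small enumeration confirming this:

```python
from itertools import combinations, permutations

# Brute-force count of simple cycles of length 0 mod r in a graph given
# by an adjacency matrix; used here to verify that K_4 has exactly 4
# cycles of length 0 mod 3 (its four triangles).

def count_cycles_mod(adj, r):
    n = len(adj)
    count = 0
    for size in range(3, n + 1):
        for verts in combinations(range(n), size):
            for perm in permutations(verts[1:]):
                # Fix the smallest vertex first and one orientation so
                # each cycle is enumerated exactly once.
                if perm[0] > perm[-1]:
                    continue
                cycle = (verts[0],) + perm
                if all(adj[cycle[i]][cycle[(i + 1) % size]]
                       for i in range(size)) and size % r == 0:
                    count += 1
    return count

K4 = [[u != v for v in range(4)] for u in range(4)]
print(count_cycles_mod(K4, 3))  # 4: the four triangles of K_4
```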

Subsurface storage of CO$_2$ is an important means of mitigating climate change, and to investigate the fate of CO$_2$ over several decades in vast reservoirs, numerical simulation based on realistic models is essential. Faults and other complex geological structures introduce modeling challenges, as their effects on storage operations are uncertain due to limited data. In this work, we present a computational framework for forward propagation of uncertainty, including stochastic upscaling and copula representation of flow functions for a CO$_2$ storage site, using the Vette fault zone in the Smeaheia formation in the North Sea as a test case. The upscaling method reduces both the number of stochastic dimensions and the cost of evaluating the reservoir model. A viable model that represents the upscaled data needs to capture dependencies between variables and allow sampling. Copulas provide a representation of dependent multidimensional random variables and a good fit to data, allow fast sampling, and couple to the forward propagation method via independent uniform random variables. The non-stationary correlations within some of the upscaled flow functions are accurately captured by a data-driven transformation model. The uncertainty in upscaled flow functions and other parameters is propagated to uncertain leakage estimates using numerical reservoir simulation of a two-phase system. The expectations of leakage are estimated by an adaptive stratified sampling technique, where samples are sequentially concentrated in regions of the parameter space so as to greedily maximize variance reduction. We demonstrate cost reductions compared to standard Monte Carlo of one or two orders of magnitude for simpler test cases, with only fault and reservoir layer permeabilities assumed uncertain, and of factors 2--8 for stochastic multi-phase flow properties and more complex stochastic models.
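The basic variance-reduction mechanism behind stratified sampling can be sketched as follows. This is a plain, non-adaptive version for $E[f(U)]$ with $U \sim \mathrm{Uniform}(0,1)$; the paper's method additionally adapts the per-stratum sample allocation, which is not shown here.

```python
import random

# Plain stratified sampling: split [0, 1] into equal strata and draw
# the same number of samples in each, so every region of the domain is
# covered evenly and the between-strata variance is eliminated.

def stratified_mean(f, n_strata, per_stratum, rng):
    total = 0.0
    for s in range(n_strata):
        lo = s / n_strata
        for _ in range(per_stratum):
            total += f(lo + rng.random() / n_strata)
    return total / (n_strata * per_stratum)

rng = random.Random(0)
est = stratified_mean(lambda x: x * x, n_strata=100, per_stratum=10, rng=rng)
print(est)  # close to the exact value 1/3
```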

0-dimensional persistent homology is known, from a computational point of view, as the easy case. Indeed, given a list of $n$ edges in non-decreasing order of filtration value, one only needs a union-find data structure to keep track of the connected components and we get the persistence diagram in time $O(n\alpha(n))$. The running time is thus usually dominated by sorting the edges in $\Theta(n\log(n))$. A little-known fact is that, in the particularly simple case of studying the sublevel sets of a piecewise-linear function on $\mathbb{R}$ or $\mathbb{S}^1$, persistence can actually be computed in linear time. This note presents a simple algorithm that achieves this complexity and an extension to image persistence. An implementation is available in Gudhi.
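The union-find computation described above can be sketched in a few lines (a minimal version for vertex-valued sublevel-set filtrations, using the elder rule and discarding zero-persistence pairs; the Gudhi implementation is more general):

```python
# 0-dimensional persistence with union-find: vertices are born at their
# filtration values; when an edge merges two components, the younger
# component dies (elder rule).

def zero_dim_persistence(births, edges):
    """births[v]: filtration value of vertex v.
    edges: list of (t, u, v) sorted by non-decreasing t.
    Returns the finite (birth, death) pairs with positive persistence;
    the oldest component lives forever and is not reported."""
    parent = list(range(len(births)))
    birth = births[:]  # earliest birth within each component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pairs = []
    for t, u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        # Elder rule: the component born later dies at time t.
        if birth[ru] > birth[rv]:
            ru, rv = rv, ru
        if birth[rv] < t:  # drop zero-persistence pairs
            pairs.append((birth[rv], t))
        parent[rv] = ru
    return pairs

# Sublevel sets of a PL function on a path with vertex values 0, 2, 1;
# each edge enters at the max of its endpoint values.
print(zero_dim_persistence([0.0, 2.0, 1.0], [(2.0, 0, 1), (2.0, 1, 2)]))
# [(1.0, 2.0)]: the component born at value 1 merges at value 2.
```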

We say that a (multi)graph $G = (V,E)$ has geometric thickness $t$ if there exists a straight-line drawing $\varphi : V \rightarrow \mathbb{R}^2$ and a $t$-coloring of its edges where no two edges sharing a point in their relative interior have the same color. The Geometric Thickness problem asks whether a given multigraph has geometric thickness at most $t$. This problem was shown to be NP-hard for $t=2$ [Durocher, Gethner, and Mondal, CG 2016]. In this paper, we settle the computational complexity of Geometric Thickness by showing that it is $\exists \mathbb{R}$-complete already for thickness $57$. Moreover, our reduction shows that the problem is $\exists \mathbb{R}$-complete for $8280$-planar graphs, where a graph is $k$-planar if it admits a topological drawing with at most $k$ crossings per edge. In the course of our paper, we answer previous questions on the geometric thickness and on other related problems, in particular, that simultaneous graph embeddings of $58$ edge-disjoint graphs and pseudo-segment stretchability with chromatic number $57$ are $\exists \mathbb{R}$-complete.

We introduce a numerical scheme that approximates solutions to linear PDEs by minimizing a residual in the $W^{-1,p'}(\Omega)$ norm with exponents $p> 2$. The resulting non-linear minimization problem is solved by regularized Kacanov iterations, which make it possible to compute the solution even for large exponents $p\gg 2$. Such large exponents remedy instabilities of finite element methods for problems like convection-dominated diffusion.
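A Kacanov iteration repeatedly freezes the nonlinearity at the current iterate and solves the resulting linear (weighted quadratic) problem. The scalar toy below shows only this reweighting idea for minimizing $\sum_i |a_i x - b_i|^p$ with $p>2$; it is not the paper's $W^{-1,p'}$ finite element scheme, and the regularization and damping used here are generic stabilizations, not the paper's specific choices.

```python
# Damped, regularized Kacanov-type iteration in one scalar unknown:
# each step solves the weighted least-squares problem with weights
# |a_i x - b_i|^(p - 2) frozen at the current iterate.

def kacanov_scalar(a, b, p, iters=200, eps=1e-8, damping=0.5):
    x = 0.0
    for _ in range(iters):
        w = [max(abs(ai * x - bi), eps) ** (p - 2) for ai, bi in zip(a, b)]
        x_new = (sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
                 / sum(wi * ai * ai for wi, ai in zip(w, a)))
        x = (1 - damping) * x + damping * x_new  # damping for stability
    return x

# Minimizer of |x|^4 + |x - 1|^4 is x = 1/2 by symmetry.
x = kacanov_scalar([1.0, 1.0], [0.0, 1.0], p=4)
print(abs(x - 0.5) < 1e-6)  # True
```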
