
A $(1+\varepsilon)\textit{-stretch tree cover}$ of a metric space is a collection of trees, where every pair of points has a $(1+\varepsilon)$-stretch path in one of the trees. The celebrated $\textit{Dumbbell Theorem}$ [Arya et~al. STOC'95] states that any set of $n$ points in $d$-dimensional Euclidean space admits a $(1+\varepsilon)$-stretch tree cover with $O_d(\varepsilon^{-d} \cdot \log(1/\varepsilon))$ trees, where the $O_d$ notation suppresses terms that depend solely on the dimension~$d$. The running time of their construction is $O_d(n \log n \cdot \frac{\log(1/\varepsilon)}{\varepsilon^{d}} + n \cdot \varepsilon^{-2d})$. Since the same point may occur in multiple levels of the tree, the $\textit{maximum degree}$ of a point in the tree cover may be as large as $\Omega(\log \Phi)$, where $\Phi$ is the aspect ratio of the input point set. In this work we present a $(1+\varepsilon)$-stretch tree cover with $O_d(\varepsilon^{-d+1} \cdot \log(1/\varepsilon))$ trees, which is optimal (up to the $\log(1/\varepsilon)$ factor). Moreover, the maximum degree of points in any tree is an $\textit{absolute constant}$ for any $d$. As a direct corollary, we obtain an optimal routing scheme in low-dimensional Euclidean spaces. We also present a $(1+\varepsilon)$-stretch $\textit{Steiner}$ tree cover (that may use Steiner points) with $O_d(\varepsilon^{(-d+1)/2} \cdot \log(1/\varepsilon))$ trees, which too is optimal. The running time of our two constructions is linear in the number of edges in the respective tree covers, ignoring an additive $O_d(n \log n)$ term; this improves over the running time underlying the Dumbbell Theorem.
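The stretch guarantee can be made concrete on a toy example. The sketch below is ours, not the dumbbell construction: it measures the worst-case stretch of a single Euclidean MST over a few random points, illustrating why one tree alone can have large stretch and a cover of several trees is needed. All function names are our own.

```python
import itertools, math, random

def euclid(p, q):
    return math.dist(p, q)

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: euclid(points[e[0]], points[e[1]]))
        edges.append((u, v))
        in_tree.add(v)
    return edges

def tree_distance(points, edges, s, t):
    """Weighted path length between s and t in the tree (iterative DFS)."""
    adj = {}
    for u, v in edges:
        w = euclid(points[u], points[v])
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    stack, seen = [(s, 0.0)], {s}
    while stack:
        node, d = stack.pop()
        if node == t:
            return d
        for nb, w in adj.get(node, []):
            if nb not in seen:
                seen.add(nb)
                stack.append((nb, d + w))
    return float("inf")

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]
edges = mst_edges(pts)
stretch = max(tree_distance(pts, edges, s, t) / euclid(pts[s], pts[t])
              for s, t in itertools.combinations(range(len(pts)), 2))
print(f"max stretch of a single MST over 8 random points: {stretch:.2f}")
```

A tree cover drives this worst-case ratio down to $1+\varepsilon$ by letting each pair pick its best tree.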

Related content

We propose a Riemannian gradient descent with the Poincar\'e metric to compute the order-$\alpha$ Augustin information, a widely used quantity for characterizing exponential error behaviors in information theory. We prove that the algorithm converges to the optimum at a rate of $\mathcal{O}(1 / T)$. As far as we know, this is the first algorithm with a non-asymptotic optimization error guarantee for all positive orders. Numerical experimental results demonstrate the empirical efficiency of the algorithm. Our result is based on a novel hybrid analysis of Riemannian gradient descent for functions that are geodesically convex in a Riemannian metric and geodesically smooth in another.

A coding lattice $\Lambda_c$ and a shaping lattice $\Lambda_s$ form a nested lattice code $\mathcal{C}$ if $\Lambda_s \subseteq \Lambda_c$. Under some conditions, $\mathcal{C}$ is a finite cyclic group formed by rectangular encoding. This paper presents the conditions for the existence of such $\mathcal{C}$ and provides some designs. These designs correspond to solutions to linear Diophantine equations, so that a cyclic lattice code $\mathcal{C}$ of arbitrary codebook size $M$ can possess group isomorphism, which is an essential property for a nested lattice code to be applied in physical-layer network relaying techniques such as compute-and-forward.
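The designs are described as solutions to linear Diophantine equations. The minimal sketch below (ours, not one of the paper's lattice constructions) solves $ax + by = c$ with the extended Euclidean algorithm, the basic primitive such designs rely on.

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_diophantine(a, b, c):
    """One integer solution of a*x + b*y == c, or None if none exists."""
    g, x, y = egcd(a, b)
    if c % g != 0:
        return None          # solvable iff gcd(a, b) divides c
    k = c // g
    return x * k, y * k

x, y = solve_diophantine(7, 5, 1)
print(7 * x + 5 * y)  # -> 1
```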

Toward desirable saliency prediction, the types and numbers of inputs for a salient object detection (SOD) algorithm may change dynamically in many real-life applications. However, existing SOD algorithms are mainly designed or trained for one particular type of input and fail to generalize to other types. Consequently, multiple SOD algorithms must be prepared in advance to handle different input types, incurring large hardware and research costs. In this paper, we instead propose a new type of SOD task, termed Arbitrary Modality SOD (AM SOD), whose most prominent characteristic is that both the modality types and the number of modalities may be arbitrary or dynamically changing. The former means that the inputs to an AM SOD algorithm may be arbitrary modalities such as RGB, depth, or any combination of them; the latter means that the inputs may have an arbitrary number of modalities as the input type changes, e.g., a single-modality RGB image, dual-modality RGB-Depth (RGB-D) images, or triple-modality RGB-Depth-Thermal (RGB-D-T) images. Accordingly, a preliminary solution to the above challenges, i.e., a modality switch network (MSN), is proposed in this paper. In particular, a modality switch feature extractor (MSFE) is first designed to effectively extract discriminative features from each modality by introducing modality indicators, which generate weights for modality switching. Subsequently, a dynamic fusion module (DFM) is proposed to adaptively fuse features from a variable number of modalities based on a novel Transformer structure. Finally, a new dataset, named AM-XD, is constructed to facilitate research on AM SOD. Extensive experiments demonstrate that our AM SOD method can effectively cope with changes in the type and number of input modalities for robust salient object detection.

An $(m,n,R)$-de Bruijn covering array (dBCA) is a doubly periodic $M \times N$ array over an alphabet of size $q$ such that the set of all its $m \times n$ windows form a covering code with radius $R$. An upper bound of the smallest array area of an $(m,n,R)$-dBCA is provided using a probabilistic technique which is similar to the one that was used for an upper bound on the length of a de Bruijn covering sequence. A folding technique to construct a dBCA from a de Bruijn covering sequence or de Bruijn covering sequences code is presented. Several new constructions that yield shorter de Bruijn covering sequences and $(m,n,R)$-dBCAs with smaller areas are also provided. These constructions are mainly based on sequences derived from cyclic codes, self-dual sequences, primitive polynomials, an interleaving technique, folding, and mutual shifts of sequences with the same covering radius. Finally, constructions of de Bruijn covering sequences codes are also discussed.
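For intuition, the defining property of a dBCA can be checked by brute force on a tiny binary example. The code below is our illustration, not one of the paper's constructions: it verifies that every $m \times n$ word over the alphabet lies within Hamming distance $R$ of some window of the doubly periodic array.

```python
from itertools import product

def windows(arr, m, n):
    """All m x n windows of a doubly periodic M x N array, flattened to tuples."""
    M, N = len(arr), len(arr[0])
    wins = set()
    for i in range(M):
        for j in range(N):
            wins.add(tuple(arr[(i + r) % M][(j + c) % N]
                           for r in range(m) for c in range(n)))
    return wins

def is_dbca(arr, m, n, R, q=2):
    """True iff the windows form a covering code with radius R over [q]^(m*n)."""
    wins = windows(arr, m, n)
    for word in product(range(q), repeat=m * n):
        if all(sum(a != b for a, b in zip(word, w)) > R for w in wins):
            return False
    return True

# 2 x 2 binary array whose 1 x 2 windows {01, 10} cover all of {0,1}^2 with radius 1
print(is_dbca([[0, 1], [1, 0]], 1, 2, 1))  # -> True
```

Dropping the radius to $R=0$ makes the same array fail, since the windows would then have to realize every word exactly.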

This study focuses on statistical inference for compound models of the form $X=\xi_1+\ldots+\xi_N$, where $N$ is a random variable denoting the count of summands, which are independent and identically distributed (i.i.d.) random variables $\xi_1, \xi_2, \ldots$. The paper addresses the problem of reconstructing the distribution of $\xi$ from observed samples of $X$, a process referred to as decompounding, under the assumption that the distribution of $N$ is known. This work diverges from the conventional scope by not limiting $N$'s distribution to the Poisson type, thus embracing a broader context. We propose a nonparametric estimate for the density of $\xi$, derive its rates of convergence, and prove that these rates are minimax optimal for suitable classes of distributions of $\xi$ and $N$. Finally, we illustrate the numerical performance of the algorithm on simulated examples.
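A compound sample is straightforward to simulate, which is useful for testing any decompounding procedure. The sketch below (ours) draws $X=\xi_1+\ldots+\xi_N$ with a geometric $N$, to emphasize that $N$ need not be Poisson, and exponential summands.

```python
import random

def compound_sample(rng, p=0.5, rate=1.0):
    """One draw of X = xi_1 + ... + xi_N with N ~ Geometric(p) on {0,1,2,...}
    and i.i.d. xi_i ~ Exponential(rate)."""
    n = 0
    while rng.random() >= p:   # count failures before the first success
        n += 1
    return sum(rng.expovariate(rate) for _ in range(n))

rng = random.Random(42)
xs = [compound_sample(rng) for _ in range(100_000)]
# E[X] = E[N] * E[xi] = ((1-p)/p) * (1/rate) = 1 for p = 0.5, rate = 1
print(sum(xs) / len(xs))
```

The empirical mean matching $E[N]\,E[\xi]$ is a quick sanity check before attempting to recover the density of $\xi$ from such samples.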

We define the relative fractional independence number of a graph $G$ with respect to another graph $H$, as $$\alpha^*(G|H)=\max_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)},$$ where the maximum is taken over all graphs $W$, $G\boxtimes W$ is the strong product of $G$ and $W$, and $\alpha$ denotes the independence number. We give a non-trivial linear program to compute $\alpha^*(G|H)$ and discuss some of its properties. We show that $\alpha^*(G|H)\geq \frac{X(G)}{X(H)} \geq \frac{1}{\alpha^*(H|G)},$ where $X(G)$ can be the independence number, the zero-error Shannon capacity, the fractional independence number, the Lov\'{a}sz number, or the Schrijver's or Szegedy's variants of the Lov\'{a}sz number of a graph $G$. This inequality is the first explicit non-trivial upper bound on the ratio of the invariants of two arbitrary graphs, as mentioned earlier, which can also be used to obtain upper or lower bounds for these invariants. As explicit applications, we present new upper bounds for the ratio of the zero-error Shannon capacity of two Cayley graphs and compute new lower bounds on the Shannon capacity of certain Johnson graphs (yielding the exact value of their Haemers number). Moreover, we show that $\alpha^*(G|H)$ can be used to present a stronger version of the well-known No-Homomorphism Lemma.
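The elementary lower bound $\alpha(G\boxtimes H)\ge\alpha(G)\,\alpha(H)$ underlying these ratios is easy to verify on tiny graphs. The brute-force sketch below (ours, exponential time, for illustration only) computes $\alpha$ of a strong product directly.

```python
from itertools import combinations

def strong_product(G, H):
    """Strong product: (u1,u2)~(v1,v2) iff each coordinate is equal or adjacent,
    and the two vertices differ. Graphs are (vertex list, set of frozenset edges)."""
    (VG, EG), (VH, EH) = G, H
    V = [(u, v) for u in VG for v in VH]
    def near(x, y, E):
        return x == y or frozenset((x, y)) in E
    E = {frozenset((a, b)) for a, b in combinations(V, 2)
         if near(a[0], b[0], EG) and near(a[1], b[1], EH)}
    return V, E

def alpha(G):
    """Independence number by brute force (fine for tiny graphs)."""
    V, E = G
    for k in range(len(V), 0, -1):
        for S in combinations(V, k):
            if all(frozenset((a, b)) not in E for a, b in combinations(S, 2)):
                return k
    return 0

P3 = ([0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))})   # the path on 3 vertices
print(alpha(P3), alpha(strong_product(P3, P3)))  # -> 2 4
```

Here $\alpha(P_3\boxtimes P_3)=4=\alpha(P_3)^2$, attained by the four corner vertices; the relative fractional independence number takes a supremum of such ratios over all graphs $W$.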

In the Online List Labeling problem, a set of $n \leq N$ elements from a totally ordered universe must be stored in sorted order in an array with $m=N+\lceil\varepsilon N \rceil$ slots, where $\varepsilon \in (0,1]$ is constant, while an adversary chooses elements that must be inserted and deleted from the set. We devise a skip-list based algorithm for maintaining order against an oblivious adversary and show that the expected amortized number of writes is $O(\varepsilon^{-1}\log (n) \operatorname{poly}(\log \log n))$ per update.
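To see what "writes per update" measures, the naive baseline below (ours, not the paper's skip-list scheme) keeps the elements left-packed in sorted order and pays one write per shifted slot, which is $\Theta(n)$ per insertion in the worst case.

```python
import random

def insert_sorted(arr, x):
    """Naive baseline: keep elements left-packed and sorted; every slot written
    during the shift counts as one write."""
    i = 0
    while i < len(arr) and arr[i] is not None and arr[i] < x:
        i += 1
    writes = 0
    carry = x
    while carry is not None:       # shift the suffix right by one slot
        carry, arr[i] = arr[i], carry
        writes += 1
        i += 1
    return writes

rng = random.Random(0)
arr = [None] * 64                  # N slots with slack for n <= N elements
total = sum(insert_sorted(arr, rng.random()) for _ in range(32))
print(f"{total} writes for 32 inserts (naive baseline)")
```

The paper's skip-list-based algorithm brings the expected amortized cost down to $O(\varepsilon^{-1}\log(n)\operatorname{poly}(\log\log n))$ writes against an oblivious adversary.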

In the broadcasting problem on trees, a $\{0,1\}$-message originating in an unknown node is passed along the tree with a certain error probability $q$. The goal is to estimate the original message without knowing the order in which the nodes were informed. A variation of the problem is considering this broadcasting process on a randomly growing tree, which Addario-Berry et al. have investigated for uniform and linear preferential attachment recursive trees. We extend their studies of the majority estimator to the entire group of very simple increasing trees as well as shape exchangeable trees using the connection to inhomogeneous random walks and other stochastic processes with memory effects such as P\'olya Urns.
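A minimal simulation of the broadcasting process and the majority estimator on a fixed recursive tree is sketched below (node labels and helper names are ours). With flip probability $q=0$ the estimator trivially recovers the original message.

```python
import random

def depth(tree, v):
    """Distance from v to the root in a child -> parent dictionary."""
    d = 0
    while v in tree:
        v, d = tree[v], d + 1
    return d

def broadcast(tree, root, q, rng):
    """Pass a 0/1 message from the root down the tree; each edge flips
    the bit independently with probability q."""
    msg = {root: 0}                          # original message is 0 w.l.o.g.
    for v in sorted(tree, key=lambda v: depth(tree, v)):  # parents first
        msg[v] = msg[tree[v]] ^ (rng.random() < q)
    return msg

def majority_estimate(msg):
    ones = sum(msg.values())
    return int(ones * 2 > len(msg))

rng = random.Random(1)
tree = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2, 7: 3}   # node -> parent
msg = broadcast(tree, 0, q=0.0, rng=rng)            # q = 0: no corruption
print(majority_estimate(msg))  # -> 0
```

The interesting regime studied in the abstract is $q>0$ on a tree that itself grows randomly, where the estimator must succeed without knowing the order in which nodes were informed.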

We study the problem of chasing positive bodies in $\ell_1$: given a sequence of bodies $K_{t}=\{x^{t}\in\mathbb{R}_{+}^{n}\mid C^{t}x^{t}\geq 1,P^{t}x^{t}\leq 1\}$ revealed online, where $C^{t}$ and $P^{t}$ are nonnegative matrices, the goal is to (approximately) maintain a point $x_t \in K_t$ such that $\sum_t \|x_t - x_{t-1}\|_1$ is minimized. This captures the fully-dynamic low-recourse variant of any problem that can be expressed as a mixed packing-covering linear program and thus also the fractional version of many central problems in dynamic algorithms such as set cover, load balancing, hyperedge orientation, minimum spanning tree, and matching. We give an $O(\log d)$-competitive algorithm for this problem, where $d$ is the maximum row sparsity of any matrix $C^t$. This bypasses and improves exponentially over the lower bound of $\sqrt{n}$ known for general convex bodies. Our algorithm is based on iterated information projections, and, in contrast to general convex body chasing algorithms, is entirely memoryless. We also show how to round our solution dynamically to obtain the first fully dynamic algorithms with competitive recourse for all the stated problems above; i.e. their recourse is less than the recourse of every other algorithm on every update sequence, up to polylogarithmic factors. This is a significantly stronger notion than the notion of absolute recourse in the dynamic algorithms literature.

We consider the $\textit{Similarity Sketching}$ problem: Given a universe $[u] = \{0,\ldots, u-1\}$ we want a random function $S$ mapping subsets $A\subseteq [u]$ into vectors $S(A)$ of size $t$, such that the Jaccard similarity $J(A,B) = |A\cap B|/|A\cup B|$ between sets $A$ and $B$ is preserved. More precisely, define $X_i = [S(A)[i] = S(B)[i]]$ and $X = \sum_{i\in [t]} X_i$. We want $E[X_i]=J(A,B)$, and we want $X$ to be strongly concentrated around $E[X] = t \cdot J(A,B)$ (i.e. Chernoff-style bounds). This is a fundamental problem which has found numerous applications in data mining, large-scale classification, computer vision, similarity search, etc. via the classic MinHash algorithm. The vectors $S(A)$ are also called $\textit{sketches}$. Strong concentration is critical, for often we want to sketch many sets $B_1,\ldots,B_n$ so that we later, for a query set $A$, can find (one of) the most similar $B_i$. It is then critical that no $B_i$ looks much more similar to $A$ due to errors in the sketch. The seminal $t\times\textit{MinHash}$ algorithm uses $t$ random hash functions $h_1,\ldots, h_t$, and stores $\left ( \min_{a\in A} h_1(a),\ldots, \min_{a\in A} h_t(a) \right )$ as the sketch of $A$. The main drawback of MinHash is, however, its $O(t\cdot |A|)$ running time, and finding a sketch with similar properties and faster running time has been the subject of several papers. (continued...)
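For concreteness, the classic $t\times$MinHash sketch described above can be implemented in a few lines; the $O(t\cdot|A|)$ cost it is criticized for is visible in the double loop. In this sketch (ours), random permutations of $[u]$ stand in for the random hash functions.

```python
import random

def minhash_sketch(A, hashes):
    """t x MinHash: the sketch stores min_{a in A} h_i(a) for each hash h_i.
    Cost is O(t * |A|), the drawback noted in the abstract."""
    return [min(h[a] for a in A) for h in hashes]

def jaccard_estimate(sA, sB):
    """Fraction of coordinates where the sketches agree; E[estimate] = J(A, B)."""
    return sum(x == y for x, y in zip(sA, sB)) / len(sA)

u, t = 100, 200
rng = random.Random(0)
# simulate t independent random hash functions as random permutations of [u]
hashes = []
for _ in range(t):
    perm = list(range(u))
    rng.shuffle(perm)
    hashes.append(perm)

A = set(range(0, 60))        # |A ∩ B| = 30, |A ∪ B| = 90, so J(A, B) = 1/3
B = set(range(30, 90))
est = jaccard_estimate(minhash_sketch(A, hashes), minhash_sketch(B, hashes))
print(f"estimate {est:.3f} vs true {30/90:.3f}")
```

With $t=200$ coordinates the estimate concentrates around $J(A,B)$ with standard deviation $\sqrt{J(1-J)/t}\approx 0.033$, illustrating the Chernoff-style concentration the problem statement demands.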
