
A dictionary data structure maintains a set of at most $n$ keys from the universe $[U]$ under key insertions and deletions, such that given a query $x \in [U]$, it returns whether $x$ is in the set. Some variants also store values associated with the keys, such that given a query $x$, the value associated with $x$ is returned when $x$ is in the set. This fundamental data structure problem has been studied for seven decades, since the introduction of hash tables in 1953. A hash table occupies $O(n\log U)$ bits of space and supports each operation in constant expected time, and there is a vast literature on improving its time and space usage. The state-of-the-art dictionary by Bender, Farach-Colton, Kuszmaul, Kuszmaul and Liu [BFCK+22] has space consumption close to the information-theoretic optimum, using a total of \[ \log\binom{U}{n}+O(n\log^{(k)} n) \] bits, while supporting all operations in $O(k)$ time, for any parameter $k \leq \log^* n$. The term $O(\log^{(k)} n) = O(\underbrace{\log\cdots\log}_k n)$ is referred to as the wasted bits per key. In this paper, we prove a matching cell-probe lower bound: for $U=n^{1+\Theta(1)}$, any dictionary with $O(\log^{(k)} n)$ wasted bits per key must have expected operation time $\Omega(k)$ in the cell-probe model with word size $w=\Theta(\log U)$. Furthermore, if a dictionary stores values of $\Theta(\log U)$ bits, we show that, regardless of the query time, it must have $\Omega(k)$ expected update time. It is worth noting that this is the first cell-probe lower bound on the trade-off between space and update time for general data structures.
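To make the space bound concrete, the short Python sketch below evaluates the information-theoretic minimum $\log\binom{U}{n}$ (per key) and the iterated-logarithm term $\log^{(k)} n$ for hypothetical values of $n$, $U$, and $k$; the parameters are chosen only for intuition and are not taken from the paper.

```python
import math

def log2_binom(U, n):
    """log2 C(U, n) via lgamma: the information-theoretic minimum, in bits."""
    return (math.lgamma(U + 1) - math.lgamma(n + 1) - math.lgamma(U - n + 1)) / math.log(2)

def iterated_log(n, k):
    """log^{(k)} n: log2 applied k times (k = 1 gives log2 n)."""
    x = float(n)
    for _ in range(k):
        x = math.log2(x)
    return x

# Hypothetical parameters chosen only for intuition (U = n^2, so U = n^{1+Theta(1)}).
n, U = 2**20, 2**40
print(f"information-theoretic optimum: {log2_binom(U, n) / n:.2f} bits per key")
for k in range(1, 5):
    print(f"k = {k}: wasted bits per key on the order of log^({k}) n = {iterated_log(n, k):.3f}")
```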

Related content

We study temporal analogues of the Unrestricted Vertex Separator problem from the static world. An $(s,z)$-temporal separator is a set of vertices whose removal disconnects vertex $s$ from vertex $z$ at every time step in a temporal graph. The $(s,z)$-Temporal Separator problem asks for the minimum size of an $(s,z)$-temporal separator in a given temporal graph. We introduce a generalization of this problem, the $(s,z,t)$-Temporal Separator problem, where the goal is to find a smallest subset of vertices whose removal eliminates all temporal paths from $s$ to $z$ that take less than $t$ time steps. Let $\tau$ denote the number of time steps over which the temporal graph is defined (we consider discrete time steps). We characterize the values of the parameters $\tau$ and $t$ for which the problem is $\mathcal{NP}$-hard and those for which it is polynomial-time solvable. We then present a $\tau$-approximation algorithm for the $(s,z)$-Temporal Separator problem and convert it to a $\tau^2$-approximation algorithm for the $(s,z,t)$-Temporal Separator problem. We also present an inapproximability lower bound of $\Omega(\ln(n) + \ln(\tau))$ for the $(s,z,t)$-Temporal Separator problem, assuming that $\mathcal{NP}\not\subset\mbox{\sc Dtime}(n^{\log\log n})$. We then consider three special families of graphs: (1) graphs of branchwidth at most $2$, (2) graphs $G$ such that the removal of $s$ and $z$ leaves a tree, and (3) graphs of bounded pathwidth. We present polynomial-time algorithms that find a minimum $(s,z,t)$-temporal separator for families (1) and (2). For family (3), we show a polynomial-time reduction from the Discrete Segment Covering problem with bounded-length segments to the $(s,z,t)$-Temporal Separator problem on temporal graphs of bounded pathwidth.
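As a concrete (and deliberately naive) illustration of the object being minimized, the following Python sketch checks whether a candidate vertex set is an $(s,z,t)$-temporal separator by brute-force search for a surviving time-respecting path of duration less than $t$. The edge representation, the convention of strictly increasing time labels, and the duration measure (last step minus first step plus one) are modelling assumptions made for this sketch, not definitions fixed by the abstract.

```python
from collections import defaultdict

def has_fast_temporal_path(edges, s, z, t, removed=frozenset()):
    """True if some time-respecting s->z path of duration < t survives deleting `removed`.

    `edges` is a list of (u, v, step) triples meaning {u, v} is present at time `step`.
    A temporal path is assumed to use strictly increasing time steps, and its duration
    is (last step - first step + 1)."""
    if s in removed or z in removed:
        return False
    by_step = defaultdict(list)
    for u, v, step in edges:
        if u not in removed and v not in removed:
            by_step[step].append((u, v))
            by_step[step].append((v, u))
    steps = sorted(by_step)
    for start in steps:                      # try every possible first step of a path
        frontier = {s}
        for step in (k for k in steps if k >= start):
            if step - start + 1 >= t:        # duration would reach t: too slow
                break
            nxt = set(frontier)
            for u, v in by_step[step]:       # one hop per time step
                if u in frontier:
                    nxt.add(v)
            frontier = nxt
            if z in frontier:
                return True
    return False

def is_temporal_separator(edges, s, z, t, candidate):
    """True iff removing `candidate` eliminates every s->z temporal path of duration < t."""
    return not has_fast_temporal_path(edges, s, z, t, frozenset(candidate))

# Toy instance: a fast path s-a-z using steps 1,2 and a slow detour s-b-z using steps 1,3.
edges = [("s", "a", 1), ("a", "z", 2), ("s", "b", 1), ("b", "z", 3)]
print(is_temporal_separator(edges, "s", "z", t=3, candidate={"a"}))   # True: only the duration-3 detour remains
```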

We define the error distribution as the limiting distribution of the error between the solution $Y$ of a given stochastic differential equation (SDE) and its numerical approximation $\hat{Y}^{(m)}$, weighted by the convergence rate between the two. A goal when studying the error distribution is to provide a way of determining it for any SDE and any numerical scheme that converges to the exact solution. By splitting the error into a main term and a remainder term in a particular way, we show that the remainder term is negligible compared to the main term under suitable conditions. Under these conditions, deriving the error distribution reduces to deriving the limiting distribution of the main term. Even in dimension one, the asymptotic behavior of the error remains unresolved when the SDE has a drift term and the Hurst exponent satisfies $0<H\leq 1/3$, but our result in the one-dimensional case can be adapted to any Hurst exponent. The main idea of the proof is to define a stochastic process $Y^{m, \rho}$ with a parameter $\rho$ interpolating between $Y$ and $\hat{Y}^{(m)}$ and to estimate its asymptotic expansion. Using this estimate, we determine the error distribution of the $k$-Milstein scheme and of the Crank-Nicolson scheme in previously unsolved cases.
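For intuition about what "error weighted by the convergence rate" means, here is a minimal Python/NumPy sketch that simulates the classical Milstein scheme for geometric Brownian motion (driven by standard Brownian motion, i.e., $H=1/2$, with a known exact solution) and collects the endpoint error scaled by $m$, the order-one strong convergence rate. All coefficients and parameters are hypothetical, and the fractional-Brownian-motion setting and the $k$-Milstein and Crank-Nicolson schemes studied in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Geometric Brownian motion dY = mu*Y dt + sigma*Y dW, with exact solution
# Y_T = Y_0 * exp((mu - sigma^2/2) T + sigma W_T).
mu, sigma, y0, T = 0.5, 0.8, 1.0, 1.0

def milstein_endpoint(dW, dt):
    """One-path Milstein approximation of Y_T from Brownian increments dW."""
    y = y0
    for dw in dW:
        y += mu * y * dt + sigma * y * dw + 0.5 * sigma**2 * y * (dw**2 - dt)
    return y

m, n_paths = 200, 2000
dt = T / m
scaled_errors = np.empty(n_paths)
for i in range(n_paths):
    dW = rng.normal(0.0, np.sqrt(dt), size=m)
    exact = y0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
    approx = milstein_endpoint(dW, dt)
    scaled_errors[i] = m * (exact - approx)   # Milstein has strong order 1, so scale by m

print("mean and std of m*(Y_T - Yhat_T):", scaled_errors.mean(), scaled_errors.std())
```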

Consider the following parameterized counting variation of the classic subset sum problem, which arises notably in the context of higher homotopy groups of topological spaces: Let $\mathbf{v} \in \mathbb{Q}^d$ be a rational vector, $(T_{1}, T_{2}, \ldots, T_{m})$ a list of $d \times d$ rational matrices, $S \in \mathbb{Q}^{h \times d}$ a rational matrix, not necessarily square, and $k$ a parameter. The goal is to compute the number of ways one can choose $k$ matrices $T_{i_1}, T_{i_2}, \ldots, T_{i_k}$ from the list such that $ST_{i_k} \cdots T_{i_1}\mathbf{v} = \mathbf{0} \in \mathbb{Q}^h$. In this paper, we show that this problem is $\# W[2]$-hard for parameter $k$. As a consequence, computing the $k$-th homotopy group of a $d$-dimensional topological space for $d > 3$ is $\# W[2]$-hard for parameter $k$. We also discuss a decision version of the problem and several of its modifications, for which we show $W[1]/W[2]$-hardness. This is in contrast to the parameterized $k$-sum problem, which is only $W[1]$-hard (Abboud-Lewi-Williams, ESA'14). In addition, we show that the decision version of the problem without the parameter is undecidable, and we give a fixed-parameter tractable algorithm for matrices of bounded size over finite fields, parameterized by the matrix dimensions and the order of the field.
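The brute-force baseline for this counting problem is easy to state; the sketch below enumerates all $m^k$ index tuples and counts those whose matrix product annihilates $\mathbf{v}$ under $S$. Whether selections are ordered and whether repetition is allowed are modelling choices not pinned down by the abstract; the sketch assumes ordered tuples with repetition, and the tiny instance at the end is purely hypothetical.

```python
from fractions import Fraction
from itertools import product

def count_vanishing_products(S, Ts, v, k):
    """Count index tuples (i_1, ..., i_k) with S @ T_{i_k} @ ... @ T_{i_1} @ v == 0.

    Matrices and vectors are lists of lists over the rationals.  Runtime is
    Theta(m^k), exactly the brute force that the hardness result suggests is
    hard to beat in general."""
    def matvec(M, x):
        return [sum(Fraction(M[r][c]) * x[c] for c in range(len(x))) for r in range(len(M))]

    count = 0
    for idx in product(range(len(Ts)), repeat=k):
        x = [Fraction(a) for a in v]
        for i in idx:                 # apply T_{i_1} first, ..., T_{i_k} last
            x = matvec(Ts[i], x)
        if all(c == 0 for c in matvec(S, x)):
            count += 1
    return count

# Tiny hypothetical instance: d = 2, h = 1, m = 2, k = 2.
T1 = [[0, 1], [0, 0]]          # nilpotent shift
T2 = [[1, 0], [0, 1]]          # identity
S  = [[1, 0]]                  # project onto the first coordinate
v  = [1, 0]
print(count_vanishing_products(S, [T1, T2], v, 2))   # -> 3: the tuples (1,1), (1,2), (2,1) in 1-based indexing
```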

We present the first $\varepsilon$-differentially private, computationally efficient algorithm that estimates the means of product distributions over $\{0,1\}^d$ accurately in total-variation distance, while attaining the optimal sample complexity up to polylogarithmic factors. Prior work either solved this problem efficiently and optimally under weaker notions of privacy, or solved it optimally but with exponential running time.

Topological data analysis (TDA) allows us to explore the topological features of a dataset. Among topological features, lower-dimensional ones have recently drawn the attention of practitioners in mathematics and statistics due to their potential to aid the discovery of low-dimensional structure in a dataset. However, lower-dimensional features are usually challenging to detect from finite samples when using TDA methods that ignore the probabilistic mechanism that generates the data. In this paper, lower-dimensional topological features occurring as zero-density regions of density functions are introduced and thoroughly investigated. Specifically, we consider sequences of coverings of the support of a density function in which the coverings consist of balls with shrinking radii. We show that, when these coverings satisfy certain sufficient conditions as the sample size goes to infinity, we can detect lower-dimensional, zero-density regions with increasingly high probability while guarding against false detections. We supplement the theoretical developments with a discussion of simulation experiments that elucidate the behavior of the methodology for different choices of the tuning parameters governing the construction of the covering sequences and that characterize the asymptotic results.
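As a toy illustration of the covering idea, the following Python sketch samples from a density supported on an annulus (so the inner disk is a zero-density region), covers a bounding box with balls of a shrinking radius, and flags balls containing no sample point as candidate zero-density regions. The radius rule and all parameters are ad hoc choices for illustration and do not implement the paper's sufficient conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample from a density supported on an annulus: the inner disk is a zero-density region.
n = 2000
theta = rng.uniform(0, 2 * np.pi, n)
radius = rng.uniform(0.5, 1.0, n)            # no mass inside radius 0.5
X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

# Cover [-1, 1]^2 with balls centred on a grid; shrink the radius with n (ad hoc rule).
r_n = 2.5 * (np.log(n) / n) ** 0.5
centers = np.stack(np.meshgrid(np.linspace(-1, 1, 30),
                               np.linspace(-1, 1, 30)), -1).reshape(-1, 2)

# A ball is flagged as a candidate zero-density region if it contains no sample point.
dists = np.linalg.norm(centers[:, None, :] - X[None, :, :], axis=-1)
empty = dists.min(axis=1) > r_n
inside_hole = np.linalg.norm(centers, axis=1) < 0.5 - r_n
print("empty balls:", empty.sum(), " of which inside the true hole:", empty[inside_hole].sum())
```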

Aspect-Based Sentiment Analysis (ABSA) delves into understanding sentiments specific to distinct elements within textual content. It aims to analyze user-generated reviews to determine a) the target entity being reviewed, b) the high-level aspect to which it belongs, c) the sentiment words used to express the opinion, and d) the sentiment expressed toward the targets and the aspects. While various benchmark datasets have fostered advancements in ABSA, they often come with domain limitations and data granularity challenges. Addressing these, we introduce the OATS dataset, which encompasses three new domains and consists of 20,000 sentence-level quadruples and 13,000 review-level tuples. Our initiative seeks to bridge specific observed gaps: the recurrent focus on familiar domains such as restaurants and laptops, the limited data available for intricate quadruple extraction tasks, and an occasional oversight of the synergy between sentence-level and review-level sentiments. Moreover, to elucidate OATS's potential and shed light on the various ABSA subtasks it can support, we conducted in-domain and cross-domain experiments, establishing initial baselines. We hope the OATS dataset augments current resources, paving the way for a comprehensive exploration of ABSA.
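For readers unfamiliar with the quadruple format, the snippet below shows one possible way to represent a sentence-level ABSA quadruple in Python; the field names and the example values are hypothetical illustrations of the four elements listed above, not the actual OATS schema.

```python
from dataclasses import dataclass

@dataclass
class AspectQuad:
    """One sentence-level ABSA quadruple: target, aspect category, opinion, polarity.

    Hypothetical illustration of the four elements named in the abstract; the real
    OATS annotation schema may use different names and value sets."""
    target: str          # the entity being reviewed
    aspect: str          # the high-level aspect category it belongs to
    opinion: str         # the sentiment words expressing the opinion
    polarity: str        # the sentiment toward the target/aspect

quad = AspectQuad(target="battery", aspect="hardware#quality",
                  opinion="drains quickly", polarity="negative")
print(quad)
```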

We consider the problem of extending a function $f_P$ defined on a subset $P$ of an arbitrary set $X$ to $X$ strictly monotonically with respect to a preorder $\succcurlyeq$ defined on $X$, without imposing continuity constraints. We show that whenever $\succcurlyeq$ has a utility representation, $f_P$ is extendable if and only if it is gap-safe increasing. A class of extensions involving an arbitrary utility representation of $\succcurlyeq$ is proposed and investigated. Connections to related topological results are discussed. The condition of extendability and the form of the extension are simplified when $P$ is a Pareto set.

We propose an original approach to investigating the linearity of Gray codes obtained from $\mathbb{Z}_{2^L}$-additive codes by introducing two related binary codes: the associated code and the concatenated code. Once they are defined, the linearity of the respective Gray code can be determined through a straightforward analysis of the Schur product of their codewords. This work expands on earlier contributions in the literature, where linearity was established with respect to the kernel of a code and/or operations on $\mathbb{Z}_{2^L}$. The $\mathbb{Z}_{2^L}$-additive codes to which we apply the Gray map and whose linearity we check are the well-known Hadamard, simplex, MacDonald, Kerdock, and Preparata codes. We also present a family of Reed-Muller codes that yield linear Gray codes, and we perform a computational verification of our proposed method applied to other $\mathbb{Z}_{2^L}$-additive codes.
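To ground the notions of Gray map and linearity checking, here is a small Python sketch for the special case $L=2$ (i.e., $\mathbb{Z}_4$-additive codes): it applies the classical Gray map and tests whether the binary image is closed under coordinatewise XOR. The generator matrix in the example is a toy choice, and the associated/concatenated codes and the Schur-product analysis introduced in the paper are not reproduced here.

```python
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # classical Z_4 Gray map

def gray_image(codeword):
    return tuple(b for symbol in codeword for b in GRAY[symbol])

def z4_code_from_generators(gens, n):
    """All Z_4-linear combinations of the generator rows (length-n tuples over Z_4)."""
    code = set()
    for coeffs in product(range(4), repeat=len(gens)):
        cw = tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 4 for i in range(n))
        code.add(cw)
    return code

def gray_code_is_linear(code):
    """Check closure of the binary Gray image under coordinatewise XOR."""
    image = {gray_image(c) for c in code}
    return all(tuple(a ^ b for a, b in zip(u, v)) in image
               for u in image for v in image)

# Toy example: the Z_4-additive code generated by (1, 1, 1, 1) and (0, 2, 0, 2).
gens = [(1, 1, 1, 1), (0, 2, 0, 2)]
code = z4_code_from_generators(gens, 4)
print("Gray image linear?", gray_code_is_linear(code))   # expected: True for this toy code
```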

The recent development of online static map element (a.k.a. HD Map) construction algorithms has raised a vast demand for data with ground-truth annotations. However, currently available public datasets cannot provide high-quality training data in terms of consistency and accuracy. To this end, we present CAMA: a vision-centric approach for Consistent and Accurate Map Annotation. Without LiDAR inputs, our proposed framework can still generate high-quality 3D annotations of static map elements. Specifically, the annotations achieve high reprojection accuracy across all surrounding cameras and are spatially and temporally consistent across the whole sequence. We apply our proposed framework to the popular nuScenes dataset to provide efficient and highly accurate annotations. Compared with the original nuScenes static map element annotations, models trained with annotations from CAMA achieve lower reprojection errors (e.g., 4.73 vs. 8.03 pixels).
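For reference, the reprojection error cited above is the standard pinhole-camera metric; the sketch below computes it for a set of annotated 3D points and 2D observations given an intrinsic matrix and a camera pose. The function name, the intrinsics, and the toy points are hypothetical, and the exact evaluation protocol used by CAMA on nuScenes may differ.

```python
import numpy as np

def reprojection_error(points_world, uv_observed, K, R, t):
    """Mean pixel distance between observed 2D annotations and projected 3D points.

    `points_world` is (N, 3), `uv_observed` is (N, 2); K is the 3x3 intrinsic
    matrix and (R, t) are the world-to-camera extrinsics.  Generic pinhole metric,
    not the specific CAMA/nuScenes evaluation code."""
    pts_cam = points_world @ R.T + t                  # world -> camera frame
    proj = pts_cam @ K.T                              # camera -> homogeneous pixels
    uv = proj[:, :2] / proj[:, 2:3]
    return float(np.linalg.norm(uv - uv_observed, axis=1).mean())

# Hypothetical toy call with an identity pose and a made-up intrinsic matrix.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
pts = np.array([[0.5, 0.1, 10.0], [-1.0, 0.2, 12.0]])
obs = np.array([[690.0, 370.0], [557.0, 377.0]])
print(reprojection_error(pts, obs, K, np.eye(3), np.zeros(3)))
```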

Cold-start problems are long-standing challenges for practical recommendation systems. Most existing recommendation algorithms rely on extensive observed data and are brittle in recommendation scenarios with few interactions. This paper addresses such problems using few-shot learning and meta-learning. Our approach is based on the insight that generalizing well from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks. To accomplish this, we combine scenario-specific learning with model-agnostic sequential meta-learning and unify them into an integrated end-to-end framework, namely the Scenario-specific Sequential Meta learner (or s^2 meta). By doing so, our meta-learner produces a generic initial model by aggregating contextual information from a variety of prediction tasks while effectively adapting to specific tasks by leveraging learning-to-learn knowledge. Extensive experiments on various real-world datasets demonstrate that our proposed model achieves significant gains over the state of the art for cold-start problems in online recommendation. The model has been deployed at the Guess You Like section on the front page of Mobile Taobao.
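To illustrate the "generic initialization plus fast adaptation" mechanism in code, here is a minimal first-order, MAML-style meta-learning loop on toy linear-regression tasks (each task standing in for a recommendation scenario). This is a generic sketch of model-agnostic meta-learning, not the s^2 meta architecture, and all task definitions and hyperparameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(n_tasks=8, n_points=20):
    """Toy 'scenarios': each task is y = a*x + b with its own (a, b)."""
    tasks = []
    for _ in range(n_tasks):
        a, b = rng.uniform(-2, 2, size=2)
        x = rng.uniform(-1, 1, size=(n_points, 1))
        tasks.append((np.hstack([x, np.ones_like(x)]), a * x[:, 0] + b))
    return tasks

def loss_grad(w, X, y):
    """Squared-error loss and its gradient for the linear model y_hat = X @ w."""
    err = X @ w - y
    return float(err @ err) / len(y), 2.0 * X.T @ err / len(y)

# First-order MAML-style loop: learn an initialization that adapts well in one step.
w_meta = np.zeros(2)
inner_lr, outer_lr = 0.5, 0.05
for it in range(500):
    outer_grad = np.zeros(2)
    for X, y in task_batch():
        _, g = loss_grad(w_meta, X[:10], y[:10])       # adapt on the support set
        w_task = w_meta - inner_lr * g
        _, g_q = loss_grad(w_task, X[10:], y[10:])     # evaluate on the query set
        outer_grad += g_q                              # first-order approximation
    w_meta -= outer_lr * outer_grad / 8
print("meta-learned initialization:", w_meta)
```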
