
Given an increasing sequence of integers $x_1,\ldots,x_n$ from a universe $\{0,\ldots,u-1\}$, the monotone minimal perfect hash function (MMPHF) for this sequence is a data structure that answers the following rank queries: $rank(x) = i$ if $x = x_i$, for $i\in \{1,\ldots,n\}$, and $rank(x)$ is arbitrary otherwise. Assadi, Farach-Colton, and Kuszmaul recently presented at SODA'23 a proof of the lower bound $\Omega(n \min\{\log\log\log u, \log n\})$ on the number of bits of space required by an MMPHF, provided $u \ge n 2^{2^{\sqrt{\log\log n}}}$; this bound is tight since there is a data structure for MMPHF that attains it (and answers the queries in $O(\log u)$ time). In this paper, we close the remaining gap by proving that, for $u \ge (1+\epsilon)n$, where $\epsilon > 0$ is any constant, the tight lower bound is $\Omega(n \min\{\log\log\log \frac{u}{n}, \log n\})$, which is also attainable; we observe that, for all reasonable cases with $n < u < (1+\epsilon)n$, known facts imply tight bounds, which virtually settles the problem. Along the way, we substantially simplify the proof of Assadi et al., replacing a part of their heavy combinatorial machinery with trivial observations. An important part of the proof, however, remains complicated; this part of our paper repeats the arguments of Assadi et al. and is not novel, but we include it for completeness, offering a somewhat different perspective on these arguments.
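
For concreteness, here is a minimal Python sketch of the query interface only (our illustration; the class and names are hypothetical, and this baseline stores the keys explicitly rather than using the succinct space bounds discussed above):

```python
import bisect

class NaiveMMPHF:
    """Illustrative baseline that stores the keys explicitly;
    a real MMPHF answers rank() in far less space, without
    being able to recover the keys themselves."""

    def __init__(self, keys):
        # keys: strictly increasing integers from {0, ..., u-1}
        self.keys = list(keys)

    def rank(self, x):
        # Returns i (1-based) when x == x_i; the MMPHF contract
        # allows an arbitrary answer when x is not in the set.
        return bisect.bisect_left(self.keys, x) + 1

h = NaiveMMPHF([3, 17, 42, 100])
assert h.rank(42) == 3
```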

Related Content

Given a matrix $A \in \mathbb{R}^{m\times d}$ with singular values $\sigma_1\geq \cdots \geq \sigma_d$, and a random matrix $G \in \mathbb{R}^{m\times d}$ with i.i.d. $N(0,T)$ entries for some $T>0$, we derive new bounds on the Frobenius distance between the subspaces spanned by the top-$k$ (right) singular vectors of $A$ and of $A+G$. This problem arises in numerous applications in statistics, where a data matrix may be corrupted by Gaussian noise, and in the analysis of the Gaussian mechanism in differential privacy, where Gaussian noise is added to data to preserve private information. We show that, for matrices $A$ whose gaps in the top-$k$ singular values are roughly $\Omega(\sigma_k-\sigma_{k+1})$, the expected Frobenius distance between the subspaces is $\tilde{O}(\frac{\sqrt{d}}{\sigma_k-\sigma_{k+1}} \times \sqrt{T})$, improving on previous bounds by a factor of $\frac{\sqrt{m}}{\sqrt{d}} \sqrt{k}$. To obtain our bounds, we view the perturbation of the singular vectors as a diffusion process -- the Dyson-Bessel process -- and use tools from stochastic calculus to track the evolution of the subspace spanned by the top-$k$ singular vectors.
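
As a quick numerical illustration of the quantity being bounded (ours; it demonstrates the setup, not the paper's analysis), the following sketch perturbs $A$ with i.i.d. $N(0,T)$ noise and measures the Frobenius distance between the projections onto the top-$k$ right singular subspaces:

```python
import numpy as np

def topk_right_subspace(M, k):
    # Orthonormal basis (d x k) for the span of the top-k right
    # singular vectors of M.
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[:k].T

rng = np.random.default_rng(0)
m, d, k, T = 200, 50, 5, 1e-4

A = rng.standard_normal((m, d))
G = np.sqrt(T) * rng.standard_normal((m, d))  # iid N(0, T) entries

V, Vp = topk_right_subspace(A, k), topk_right_subspace(A + G, k)

# Frobenius distance between the subspaces, measured via their
# orthogonal projections.
print(np.linalg.norm(V @ V.T - Vp @ Vp.T, "fro"))
```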

Any continuous piecewise-linear function $F\colon \mathbb{R}^{n}\to \mathbb{R}$ can be represented as a linear combination of $\max$ functions of at most $n+1$ affine-linear functions. In our previous paper [``Representing piecewise linear functions by functions with small arity'', AAECC, 2023], we showed that this upper bound of $n+1$ arguments is tight. In the present paper, we extend this result by establishing a correspondence between the function $F$ and the minimal number of arguments that are needed in any such decomposition. We show that the tessellation of the input space $\mathbb{R}^{n}$ induced by the function $F$ has a direct connection to the number of arguments in the $\max$ functions.
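
For intuition (a standard illustration, not taken from the paper): in dimension $n=1$ the bound allows $\max$ functions of at most $2 = n+1$ affine arguments, and indeed $|x| = \max(x, -x)$; more generally, any continuous piecewise-linear $f\colon \mathbb{R}\to\mathbb{R}$ with breakpoints $b_1 < \cdots < b_m$ can be written as $f(x) = ax + c + \sum_{i=1}^{m} \lambda_i \max(x - b_i,\, 0)$, a linear combination of $\max$ functions of two affine arguments each.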

We study modal separability for fixpoint formulae: given two mutually exclusive fixpoint formulae $\varphi,\varphi'$, decide whether there is a modal formula $\psi$ that separates them, that is, one satisfying $\varphi\models\psi\models\neg\varphi'$. This problem has applications in finding simple reasons for inconsistency. Our main contributions are tight complexity bounds for deciding modal separability and optimal ways to compute a separator if one exists. More precisely, the problem is EXPTIME-complete in general and PSPACE-complete over words. Separators can be computed in doubly exponential time in general and in exponential time over words, and this is optimal as well. The results for general structures transfer to arbitrary, finitely branching, and finite trees. The word-case results hold for finite, infinite, and arbitrary words.
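
For a toy illustration (ours, not from the paper): the fixpoint formulae $\varphi = p \wedge \mu X.(q \vee \Diamond X)$ and $\varphi' = \neg p$ are mutually exclusive and are separated by the modal formula $\psi = p$, since $\varphi \models p$ and $p \models \neg\varphi'$. By contrast, $\varphi = \mu X.(p \vee \Diamond X)$ ("$p$ is reachable") and $\varphi' = \nu X.(\neg p \wedge \Box X)$ ("$p$ is never reachable") satisfy $\varphi' \equiv \neg\varphi$, so any separator would have to be equivalent to $\varphi$ itself; since reachability is not expressible in basic modal logic, no modal separator exists.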

Earlier, we introduced Partial Quantifier Elimination (PQE). It is a $\mathit{generalization}$ of regular quantifier elimination where one can take a $\mathit{part}$ of the formula out of the scope of quantifiers. We apply PQE to CNF formulas of propositional logic with existential quantifiers. The appeal of PQE is that many problems, like equivalence checking and model checking, can be solved in terms of PQE, and the latter can be very efficient. The main flaw of current PQE solvers is that they do not $\mathit{reuse}$ learned information. The problem here is that these PQE solvers are based on the notion of clause redundancy, and the latter is a $\mathit{structural}$ rather than a $\mathit{semantic}$ property. In this paper, we provide two important theoretical results that enable reusing the information learned by a PQE solver. Such reuse can dramatically boost the efficiency of PQE, just as conflict clause learning boosts SAT solving.
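
For a small illustration of the notion (our example, using the standard formulation where taking $F$ out of the scope of quantifiers in $\exists X[F \wedge G]$ means finding a quantifier-free $H$ such that $H \wedge \exists X[G] \equiv \exists X[F \wedge G]$): let $F = x \vee y$, $G = z \vee \neg y$, and $X = \{y\}$. Then $\exists y[(x \vee y) \wedge (z \vee \neg y)] \equiv x \vee z$ while $\exists y[z \vee \neg y] \equiv \top$, so $H = x \vee z$, the resolvent of the two clauses on $y$, solves this PQE instance.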

Given a multiset of $n$ items from $\mathcal{D}$, the \emph{profile reconstruction} problem is to estimate, for $t = 0, 1, \dots, n$, the fraction $\vec{f}[t]$ of items in $\mathcal{D}$ that appear exactly $t$ times. We consider differentially private profile estimation in a distributed, space-constrained setting where we wish to maintain an updatable, private sketch of the multiset that allows us to compute an approximation of $\vec{f} = (\vec{f}[0], \dots, \vec{f}[n])$. Using a histogram privatized with discrete Laplace noise, we show how to ``reverse'' the noise, following an approach of Dwork et al.~(ITCS '10). We show how to speed up their LP-based technique from polynomial time to $O(d + n \log n)$, where $d = |\mathcal{D}|$, and analyze the achievable error in the $\ell_1$, $\ell_2$ and $\ell_\infty$ norms. In all cases the dependency of the error on $d$ is $O( 1 / \sqrt{d})$ -- we give an information-theoretic lower bound showing that this dependence on $d$ is asymptotically optimal among all private, updatable sketches for the profile reconstruction problem with a high-probability error guarantee.
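
A minimal sketch of the privatization step only (our illustration with hypothetical names; the LP-based noise reversal of Dwork et al. and its $O(d + n \log n)$ speedup, which are the paper's subject, are omitted):

```python
import numpy as np

def discrete_laplace(scale, size, rng):
    # The difference of two iid geometric variables is a two-sided
    # geometric, i.e. discrete Laplace, random variable.
    p = 1.0 - np.exp(-1.0 / scale)
    return rng.geometric(p, size) - rng.geometric(p, size)

rng = np.random.default_rng(1)
d, n, eps = 1000, 5000, 1.0

items = rng.integers(0, d, size=n)       # multiset over a universe of size d
hist = np.bincount(items, minlength=d)   # true histogram
noisy = hist + discrete_laplace(1.0 / eps, d, rng)  # private sketch

# Naive plug-in profile estimate: fraction of universe items whose
# *noisy* count equals t; "reversing" the noise corrects its bias.
naive_profile = np.bincount(np.clip(noisy, 0, n), minlength=n + 1) / d
```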

This paper presents an $O^{*}(1.42^{n})$ time algorithm for the Maximum Cut problem on split graphs, along with a subexponential time algorithm for its decision variant.

A preference matrix $M$ for an election has an entry $p_{ij}$ for each pair of candidates, representing the proportion of voters that prefer candidate $i$ over candidate $j$. The matrix is rationalizable if it is consistent with a set of voters whose preferences are total orders. A celebrated open problem asks for a concise characterization of rationalizable preference matrices. In this paper, we generalize this matrix rationalizability question and study when a preference matrix is consistent with a set of voters whose preferences are partial orders of width $\alpha$. The width (the maximum cardinality of an antichain) of the partial order is a natural measure of the rationality of a voter; indeed, a partial order of width $1$ is a total order. Our primary focus concerns the rationality number, the minimum width required to rationalize a preference matrix. We present two main results. The first concerns the class of half-integral preference matrices, where we show that the key parameter in evaluating the rationality number is the chromatic number of the undirected unanimity graph associated with the preference matrix $M$. The second concerns the class of integral preference matrices, where we show that the key parameter is now the dichromatic number of the directed voting graph associated with $M$.
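
For concreteness (our illustration): given voters whose preferences are total orders, the induced preference matrix is computed as below, so rationalizability asks whether a given matrix arises in this way.

```python
from itertools import combinations
import numpy as np

def preference_matrix(rankings, num_candidates):
    """Entry p[i, j]: fraction of voters ranking i above j.

    rankings: list of total orders, each a list of candidate
    indices from most to least preferred."""
    p = np.zeros((num_candidates, num_candidates))
    for order in rankings:
        pos = {c: r for r, c in enumerate(order)}
        for i, j in combinations(range(num_candidates), 2):
            if pos[i] < pos[j]:
                p[i, j] += 1
            else:
                p[j, i] += 1
    return p / len(rankings)

# Three voters over candidates {0, 1, 2} forming a Condorcet cycle.
print(preference_matrix([[0, 1, 2], [1, 2, 0], [2, 0, 1]], 3))
```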

We provide an online learning algorithm that obtains regret $G\|w_\star\|\sqrt{T\log(\|w_\star\|G\sqrt{T})} + \|w_\star\|^2 + G^2$ on $G$-Lipschitz convex losses for any comparison point $w_\star$ without knowing either $G$ or $\|w_\star\|$. Importantly, this matches the optimal bound $G\|w_\star\|\sqrt{T}$ available with such knowledge (up to logarithmic factors), unless either $\|w_\star\|$ or $G$ is so large that even $G\|w_\star\|\sqrt{T}$ is roughly linear in $T$. Thus, it matches the optimal bound in all cases in which one can achieve sublinear regret, which arguably covers the most "interesting" scenarios.
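
To unpack the caveat (a standard calculation, ours): the overhead over the oracle bound is the factor $\sqrt{\log(\|w_\star\|G\sqrt{T})}$ plus the additive $\|w_\star\|^2 + G^2$; hence, whenever $\|w_\star\|G$ is at most polynomial in $T$, the regret is $O(G\|w_\star\|\sqrt{T\log T})$, while for $\|w_\star\|G \ge \sqrt{T}$ even the target $G\|w_\star\|\sqrt{T}$ is already $\Omega(T)$.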

In Path Set Packing, the input is an undirected graph $G$, a collection $\mathcal{P}$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ edge-disjoint paths in $\mathcal{P}$. We study the parameterized complexity of Path Set Packing with respect to both natural and structural parameters. We show that the problem is $W[1]$-hard with respect to vertex cover number, and $W[1]$-hard with respect to pathwidth plus maximum degree plus solution size. These results answer an open question raised at COCOON 2018. On the positive side, we present an FPT algorithm parameterized by feedback vertex number plus maximum degree, and an FPT algorithm parameterized by treewidth plus maximum degree plus the maximum length of a path in $\mathcal{P}$. These positive results complement the hardness of Path Set Packing with respect to any subset of the parameters used in the FPT algorithms. We also give a $4$-approximation algorithm for the maximum Path Set Packing problem that runs in FPT time when parameterized by feedback edge number.
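
A brute-force reference implementation of the decision problem (ours, exponential in $|\mathcal{P}|$ and only for intuition; the paper's algorithms are parameterized):

```python
from itertools import combinations

def path_set_packing(paths, k):
    """paths: list of paths, each a list of vertices; decides whether
    k of them are pairwise edge-disjoint (brute force)."""
    def edges(p):
        return {frozenset(e) for e in zip(p, p[1:])}
    edge_sets = [edges(p) for p in paths]
    for combo in combinations(range(len(paths)), k):
        union, total = set(), 0
        for i in combo:
            union |= edge_sets[i]
            total += len(edge_sets[i])
        if len(union) == total:  # no two chosen paths share an edge
            return True
    return False

# Two edge-disjoint paths exist among these three.
print(path_set_packing([[0, 1, 2], [2, 3], [1, 2, 3]], 2))
```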

Explicit, momentum-based dynamics that optimize functions defined on Lie groups can be constructed via variational optimization and momentum trivialization. Structure-preserving time discretizations can then turn these dynamics into optimization algorithms. This article investigates two types of discretization: Lie Heavy-Ball, which is a known splitting scheme, and Lie NAG-SC, which is newly proposed. Their convergence rates are explicitly quantified under $L$-smoothness and local strong convexity assumptions. Lie NAG-SC provides acceleration over the momentumless case, i.e., Riemannian gradient descent, while Lie Heavy-Ball does not. Compared to existing accelerated optimizers for general manifolds, both Lie Heavy-Ball and Lie NAG-SC are computationally cheaper and easier to implement, thanks to their use of the group structure. Only a gradient oracle and the exponential map are required, not the logarithm map or parallel transport, which are computationally costly.
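
A rough sketch in the spirit of momentum trivialization (ours; not the paper's exact splitting scheme, and the objective, step size, and momentum parameter are illustrative), showing that only the exponential map and a gradient oracle are needed:

```python
import numpy as np
from scipy.linalg import expm

def skew(M):
    # Projection onto skew-symmetric matrices, i.e. onto so(3).
    return 0.5 * (M - M.T)

def lie_heavy_ball(A, steps=300, h=0.1, beta=0.9):
    # Minimize f(R) = 0.5 * ||R - A||_F^2 over SO(3) with a
    # momentum scheme that needs only the exponential map.
    R = np.eye(3)
    xi = np.zeros((3, 3))        # trivialized momentum in so(3)
    for _ in range(steps):
        g = skew(R.T @ (R - A))  # left-trivialized Riemannian gradient
        xi = beta * xi - h * g   # momentum update in the Lie algebra
        R = R @ expm(h * xi)     # structure-preserving group update
    return R

# Target: rotation by 90 degrees about the z-axis.
A = expm((np.pi / 2) * np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]]))
print(np.linalg.norm(lie_heavy_ball(A) - A))  # should be near 0
```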
