
In the trace reconstruction problem, one observes the output of passing a binary string $s \in \{0,1\}^n$ through a deletion channel $T$ times and wishes to recover $s$ from the resulting $T$ "traces." Most of the literature has focused on characterizing the hardness of this problem in terms of the number of traces $T$ needed for perfect reconstruction either in the worst case or in the average case (over input sequences $s$). In this paper, we propose an alternative, instance-based approach to the problem. We define the "Levenshtein difficulty" of a problem instance $(s,T)$ as the probability that the resulting traces do not provide enough information for correct recovery with full certainty. One can then try to characterize, for a specific $s$, how $T$ needs to scale in order for the Levenshtein difficulty to go to zero, and seek reconstruction algorithms that match this scaling for each $s$. For a class of binary strings with alternating long runs, we precisely characterize the scaling of $T$ for which the Levenshtein difficulty goes to zero. For this class, we also prove that a simple "Las Vegas algorithm" has an error probability that decays to zero with the same rate as that with which the Levenshtein difficulty tends to zero.
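The deletion-channel setup above is easy to simulate. Below is a minimal sketch, assuming an i.i.d. deletion channel with deletion probability q; the names deletion_channel and generate_traces are illustrative, not from the paper, and the snippet only generates traces, it does not implement the paper's reconstruction algorithm.

```python
import random

def deletion_channel(s, q, rng):
    """Pass the binary string s through an i.i.d. deletion channel:
    each bit is deleted independently with probability q."""
    return "".join(bit for bit in s if rng.random() >= q)

def generate_traces(s, T, q, seed=0):
    """Collect T independent traces of s."""
    rng = random.Random(seed)
    return [deletion_channel(s, q, rng) for _ in range(T)]

# A string from the class studied above: alternating long runs.
s = "1" * 20 + "0" * 20 + "1" * 20
for trace in generate_traces(s, T=5, q=0.1):
    print(len(trace), trace)
```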

Related content

Sponge hashing is a novel class of cryptographic hash algorithms which underlies the current international hash function standard SHA-3. In a nutshell, a sponge function takes as input a bit-stream of any length and processes it via a simple iterative procedure: it repeatedly feeds each block of the input into a so-called block function, and then produces a short digest which consists of a subset of the final output bits. While much is known about the post-quantum security of the sponge construction in the case when the block function is modeled as a random function or permutation, the case of invertible permutations, which more accurately models the construction underlying SHA-3, has so far remained a fundamental open problem. In this work, we make new progress towards overcoming this barrier and show several results. First, we prove the "double-sided zero-search" conjecture proposed by Unruh (ePrint 2021) and show that finding zero-pairs in a random $2n$-bit permutation requires at least $\Omega(2^{n/2})$ queries -- and this is tight due to Grover's algorithm. At the core of our proof lies a novel "symmetrization argument" which uses insights from the theory of Young subgroups. Second, we consider more general variants of the double-sided search problem and show similar query lower bounds for them. As an application, we prove the quantum one-wayness of the single-round sponge with invertible permutations in the quantum random oracle model.
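As a rough illustration of the absorb/squeeze procedure described above, here is a toy sponge, assuming a rate R, a capacity C, and a placeholder invertible block function; it is emphatically not Keccak/SHA-3 and has no cryptographic strength.

```python
# Toy sponge (illustrative only; SHA-3's block function is the Keccak-f permutation).
R, C = 8, 16           # rate and capacity in bits; state width B = R + C
B = R + C

def block_permutation(state):
    """Placeholder invertible block function on B-bit states:
    a bit rotation XORed with a constant (a bijection, but cryptographically weak)."""
    rotated = ((state << 7) | (state >> (B - 7))) & ((1 << B) - 1)
    return rotated ^ 0x5A5A5A

def toy_sponge(message_bits, digest_bits=8):
    """Absorb the padded message in R-bit blocks, then output a short digest
    taken from the rate part of the final state."""
    # pad10*1-style padding up to a multiple of R bits
    bits = message_bits + [1] + [0] * ((-len(message_bits) - 2) % R) + [1]
    state = 0
    for i in range(0, len(bits), R):
        block = int("".join(str(b) for b in bits[i:i + R]), 2)
        state = block_permutation(state ^ (block << C))  # XOR the block into the rate part
    rate_part = format(state >> C, "0{}b".format(R))     # final output bits
    return [int(b) for b in rate_part[:digest_bits]]     # digest = subset of those bits

print(toy_sponge([1, 0, 1, 1, 0, 0, 1]))
```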

Let $\mathcal{P}$ be a simple polygon with $m$ vertices and let $P$ be a set of $n$ points inside $\mathcal{P}$. We prove that there exists, for any $\varepsilon>0$, a set $\mathcal{C} \subset P$ of size $O(1/\varepsilon^2)$ such that the following holds: for any query point $q$ inside the polygon $\mathcal{P}$, the geodesic distance from $q$ to its furthest neighbor in $\mathcal{C}$ is at least $1-\varepsilon$ times the geodesic distance to its furthest neighbor in $P$. Thus the set $\mathcal{C}$ can be used for answering $\varepsilon$-approximate furthest-neighbor queries with a data structure whose storage requirement is independent of the size of $P$. The coreset can be constructed in $O\left(\frac{1}{\varepsilon} \left( n\log(1/\varepsilon) + (n+m)\log(n+m)\right) \right)$ time.
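To make the query semantics concrete, here is a minimal sketch of answering an approximate furthest-neighbor query from a precomputed coreset. It uses plain Euclidean distance as a stand-in for the geodesic distance inside the polygon, and the coreset is hard-coded rather than constructed by the paper's algorithm; all names are illustrative.

```python
import math

def distance(p, q):
    """Euclidean distance, standing in for the geodesic distance inside the polygon."""
    return math.dist(p, q)

def furthest_in(points, q):
    """Exact furthest neighbor of q within a finite point set."""
    return max(points, key=lambda p: distance(p, q))

def approx_furthest(coreset, q):
    """Answer a furthest-neighbor query using only the coreset C; by the guarantee above,
    the returned point is at distance >= (1 - eps) times the true furthest distance in P."""
    return furthest_in(coreset, q)

# Tiny illustration with a hypothetical coreset (in the paper, C is built from P and the polygon).
P = [(0, 0), (1, 0), (0, 1), (5, 5), (5, 0), (0, 5)]
C = [(0, 0), (5, 5), (5, 0), (0, 5)]
q = (1, 1)
print(furthest_in(P, q), approx_furthest(C, q))
```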

The optimization of a black-box simulator over control parameters $\mathbf{x}$ arises in a myriad of scientific applications. In such applications, the simulator often takes the form $f(\mathbf{x},\boldsymbol{\theta})$, where $\boldsymbol{\theta}$ are parameters that are uncertain in practice. Robust optimization aims to optimize the objective $\mathbb{E}[f(\mathbf{x},\boldsymbol{\Theta})]$, where $\boldsymbol{\Theta} \sim \mathcal{P}$ is a random variable that models uncertainty on $\boldsymbol{\theta}$. For this, existing black-box methods typically employ a two-stage approach for selecting the next point $(\mathbf{x},\boldsymbol{\theta})$, where $\mathbf{x}$ and $\boldsymbol{\theta}$ are optimized separately via different acquisition functions. As such, these approaches do not employ a joint acquisition over $(\mathbf{x},\boldsymbol{\theta})$, and thus may fail to fully exploit control-to-noise interactions for effective robust optimization. To address this, we propose a new Bayesian optimization method called Targeted Variance Reduction (TVR). The TVR leverages a novel joint acquisition function over $(\mathbf{x},\boldsymbol{\theta})$, which targets variance reduction on the objective within the desired region of improvement. Under a Gaussian process surrogate on $f$, the TVR acquisition can be evaluated in closed form, and reveals an insightful exploration-exploitation-precision trade-off for robust black-box optimization. The TVR can further accommodate a broad class of non-Gaussian distributions on $\mathcal{P}$ via a careful integration of normalizing flows. We demonstrate the improved performance of TVR over the state-of-the-art in a suite of numerical experiments and an application to the robust design of automobile brake discs under operational uncertainty.
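The robust objective itself is straightforward to estimate by Monte Carlo; the sketch below does this for a hypothetical simulator with a Gaussian $\boldsymbol{\Theta}$ and a crude grid search. It illustrates only the objective $\mathbb{E}[f(\mathbf{x},\boldsymbol{\Theta})]$, not the TVR acquisition, whose closed form requires the Gaussian process surrogate described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x, theta):
    """Hypothetical black-box f(x, theta), standing in for an expensive simulator."""
    return (x - 2.0) ** 2 + np.sin(3.0 * x) * theta + 0.5 * theta ** 2

def robust_objective(x, n_samples=2000):
    """Monte Carlo estimate of E[f(x, Theta)] with Theta ~ P (here a Gaussian)."""
    thetas = rng.normal(loc=0.0, scale=0.3, size=n_samples)
    return simulator(x, thetas).mean()

# Crude grid search on the robust objective; TVR instead selects (x, theta) jointly
# via its acquisition function under a GP surrogate on f.
grid = np.linspace(0.0, 4.0, 81)
best_x = min(grid, key=robust_objective)
print(best_x, robust_objective(best_x))
```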

Consider a binary statistical hypothesis testing problem, where $n$ independent and identically distributed random variables $Z^n$ are either distributed according to the null hypothesis $P$ or the alternative hypothesis $Q$, and only $P$ is known. A well-known test that is suitable for this case is the so-called Hoeffding test, which accepts $P$ if the Kullback-Leibler (KL) divergence between the empirical distribution of $Z^n$ and $P$ is below some threshold. This work characterizes the first- and second-order terms of the type-II error probability for a fixed type-I error probability for the Hoeffding test as well as for divergence tests, where the KL divergence is replaced by a general divergence. It is demonstrated that, irrespective of the divergence, divergence tests achieve the first-order term of the Neyman-Pearson test, which is the optimal test when both $P$ and $Q$ are known. In contrast, the second-order term of divergence tests is strictly worse than that of the Neyman-Pearson test. It is further demonstrated that divergence tests with an invariant divergence achieve the same second-order term as the Hoeffding test, but divergence tests with a non-invariant divergence may outperform the Hoeffding test for some alternative hypotheses $Q$. Potentially, this behavior could be exploited by a composite hypothesis test with partial knowledge of the alternative hypothesis $Q$ by tailoring the divergence of the divergence test to the set of possible alternative hypotheses.
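A minimal sketch of the Hoeffding test as described above: accept $P$ when the KL divergence between the empirical distribution of the samples and $P$ falls below a threshold. The particular threshold used here (of order $\log(n)/n$) is only an illustrative choice, not one prescribed by the paper.

```python
import math
import random
from collections import Counter

def kl_divergence(p_hat, p):
    """KL divergence D(p_hat || p) over a finite alphabet."""
    return sum(p_hat[z] * math.log(p_hat[z] / p[z]) for z in p_hat if p_hat[z] > 0)

def hoeffding_test(samples, p, threshold):
    """Accept the null hypothesis P iff D(empirical || P) < threshold."""
    n = len(samples)
    counts = Counter(samples)
    p_hat = {z: counts[z] / n for z in counts}
    return kl_divergence(p_hat, p) < threshold

# Binary-alphabet example.
random.seed(0)
P = {0: 0.5, 1: 0.5}   # known null hypothesis
Q = {0: 0.8, 1: 0.2}   # unknown alternative
n = 500
samples_from_P = random.choices([0, 1], weights=[P[0], P[1]], k=n)
samples_from_Q = random.choices([0, 1], weights=[Q[0], Q[1]], k=n)
threshold = 2 * math.log(n) / n
print(hoeffding_test(samples_from_P, P, threshold))   # expected: True  (accept P)
print(hoeffding_test(samples_from_Q, P, threshold))   # expected: False (reject P)
```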

We study the problem of maintaining a lightweight bounded-degree $(1+\varepsilon)$-spanner of a dynamic point set in a $d$-dimensional Euclidean space, where $\varepsilon>0$ and $d$ are arbitrary constants. In our fully-dynamic setting, points are allowed to be inserted as well as deleted, and our objective is to maintain a $(1+\varepsilon)$-spanner that has constant bounds on its maximum degree and its lightness (the ratio of its weight to that of the minimum spanning tree), while minimizing the recourse, which is the number of edges added or removed by each point insertion or deletion. We present a fully-dynamic algorithm that handles point insertion with amortized constant recourse and point deletion with amortized $O(\log\Delta)$ recourse, where $\Delta$ is the aspect ratio of the point set.
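For concreteness, the lightness of a geometric graph can be computed directly from its definition above, as the ratio of its total edge weight to the weight of the minimum spanning tree of the point set. The sketch below does this for a small point set with an inline Kruskal MST; it illustrates only the definition, not the dynamic algorithm.

```python
import math
from itertools import combinations

def mst_weight(num_points, edges):
    """Kruskal's algorithm over the given weighted edges (w, u, v), with union-find."""
    parent = list(range(num_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    total = 0.0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

def lightness(points, spanner_edges):
    """Lightness = weight(spanner) / weight(MST of the point set)."""
    all_edges = [(math.dist(points[u], points[v]), u, v)
                 for u, v in combinations(range(len(points)), 2)]
    spanner_weight = sum(math.dist(points[u], points[v]) for u, v in spanner_edges)
    return spanner_weight / mst_weight(len(points), all_edges)

# Tiny example: 4 collinear points, spanner = path plus one long edge.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
print(lightness(pts, [(0, 1), (1, 2), (2, 3), (0, 3)]))   # MST weight 3, spanner weight 6
```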

We study the complexity of approximating the number of answers to a small query $\varphi$ in a large database $\mathcal{D}$. We establish an exhaustive classification into tractable and intractable cases if $\varphi$ is a conjunctive query with disequalities and negations: $\bullet$ If there is a constant bound on the arity of $\varphi$, and if the randomised Exponential Time Hypothesis (rETH) holds, then the problem has a fixed-parameter tractable approximation scheme (FPTRAS) if and only if the treewidth of $\varphi$ is bounded. $\bullet$ If the arity is unbounded and we allow disequalities only, then the problem has an FPTRAS if and only if the adaptive width of $\varphi$ (a width measure strictly more general than treewidth) is bounded; the lower bound relies on the rETH as well. Additionally we show that our results cannot be strengthened to achieve a fully polynomial randomised approximation scheme (FPRAS): We observe that, unless $\mathrm{NP} =\mathrm{RP}$, there is no FPRAS even if the treewidth (and the adaptive width) is $1$. However, if there are neither disequalities nor negations, we prove the existence of an FPRAS for queries of bounded fractional hypertreewidth, strictly generalising the recently established FPRAS for conjunctive queries with bounded hypertreewidth due to Arenas, Croquevielle, Jayaram and Riveros (STOC 2021).

The stochastic block model (SBM) is a generalization of the Erd\H{o}s--R\'enyi model of random graphs that describes the interaction of a finite number of distinct communities. In sparse Erd\H{o}s--R\'enyi graphs, it is known that a linear-time algorithm of Karp and Sipser achieves near-optimal matching sizes asymptotically almost surely, giving a law of large numbers for the matching sizes of such graphs in terms of solutions to an ODE. We provide an extension of this analysis, identifying broad ranges of stochastic block model parameters for which the Karp--Sipser algorithm achieves near-optimal matching sizes, but demonstrating that it cannot perform optimally on general SBM instances. We also consider the problem of constructing a matching online, in which the vertices of one half of a bipartite stochastic block model arrive one at a time, and must be matched as they arrive. We show that the competitive ratio lower bound of 0.837 found by Mastin and Jaillet for the Erd\H{o}s--R\'enyi case is tight whenever the expected degrees in all communities are equal. We propose several linear-time algorithms for online matching in the general stochastic block model, but prove that despite very good experimental performance, none of these achieve online asymptotic optimality.
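A minimal sketch of the Karp--Sipser heuristic as it is usually stated: repeatedly match a pendant (degree-1) vertex to its unique neighbor and, if no pendant exists, match a uniformly random remaining edge. This illustrates the heuristic only, not the paper's SBM analysis; the function name and example graph are illustrative.

```python
import random

def karp_sipser(adjacency, seed=0):
    """Greedy matching heuristic: prefer pendant (degree-1) vertices,
    otherwise match a uniformly random remaining edge."""
    rng = random.Random(seed)
    adj = {u: set(vs) for u, vs in adjacency.items()}   # work on a copy
    matching = []

    def remove(u):
        for v in adj.pop(u, set()):
            adj[v].discard(u)

    while any(adj.values()):
        pendants = [u for u, vs in adj.items() if len(vs) == 1]
        if pendants:
            u = pendants[0]
            v = next(iter(adj[u]))
        else:
            u = rng.choice([x for x, vs in adj.items() if vs])
            v = rng.choice(sorted(adj[u]))
        matching.append((u, v))
        remove(u)
        remove(v)
    return matching

# Small example graph; on sparse G(n, c/n) this heuristic is near-optimal,
# which the paper extends to ranges of SBM parameters.
G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}, 5: {6}, 6: {5}}
print(karp_sipser(G))
```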

We design a deterministic subexponential time algorithm that takes as input a multivariate polynomial $f$ computed by a constant-depth circuit over rational numbers, and outputs a list $L$ of circuits (of unbounded depth and possibly with division gates) that contains all irreducible factors of $f$ computable by constant-depth circuits. This list $L$ might also include circuits that are spurious: they either do not correspond to factors of $f$ or are not even well-defined, e.g., the input to a division gate is a sub-circuit that computes the identically zero polynomial. The key technical ingredient of our algorithm is a notion of the pseudo-resultant of $f$ and a factor $g$, which serves as a proxy for the resultant of $g$ and $f/g$, with the advantage that the circuit complexity of the pseudo-resultant is comparable to the circuit complexity of $f$ and $g$. This notion, which might be of independent interest, together with the recent results of Limaye, Srinivasan and Tavenas, helps us derandomize one key step of multivariate polynomial factorization algorithms: that of deterministically finding a good starting point for Newton iteration in the case when the input polynomial as well as the irreducible factor of interest have small constant-depth circuits.

It is a longstanding conjecture that every simple drawing of a complete graph on $n \geq 3$ vertices contains a crossing-free Hamiltonian cycle. We strengthen this conjecture to "there exists a crossing-free Hamiltonian path between each pair of vertices" and show that this stronger conjecture holds for several classes of simple drawings, including strongly c-monotone drawings and cylindrical drawings. As a second main contribution, we give an overview on different classes of simple drawings and investigate inclusion relations between them up to weak isomorphism.

For positive integers $d$ and $p$ such that $d \ge p$, we obtain complete asymptotic expansions, for large $d$, of the normalizing constants for the matrix Bingham and matrix Langevin distributions on Stiefel manifolds. The accuracy of each truncated expansion is strictly increasing in $d$; also, for sufficiently large $d$, the accuracy is strictly increasing in $m$, the number of terms in the truncated expansion. We apply these results to obtain the rate of convergence of these asymptotic expansions as both $d, p \to \infty$. Using values of $d$ and $p$ arising in various data sets, we illustrate the rate of convergence of the truncated approximations as $d$ or $m$ increases. These results extend our recent work on asymptotic expansions for the normalizing constants of the high-dimensional Bingham distributions.
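For orientation, a brief reminder of the objects involved, recalled from the standard directional-statistics literature rather than from the abstract itself (so the exact arguments of the hypergeometric functions should be treated as an assumption): on the Stiefel manifold $V_{p,d} = \{X \in \mathbb{R}^{d \times p} : X^{\top}X = I_p\}$ with $p \le d$, the matrix Langevin density is proportional to $\operatorname{etr}(F^{\top}X)$ with normalizing constant ${}_0F_1\big(\tfrac{d}{2}; \tfrac{1}{4}F^{\top}F\big)$, and the matrix Bingham density is proportional to $\operatorname{etr}(X^{\top}AX)$ with normalizing constant ${}_1F_1\big(\tfrac{p}{2}; \tfrac{d}{2}; A\big)$; these hypergeometric functions of matrix argument are the constants whose large-$d$ expansions are obtained above.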
