
Sorting is the task of ordering $n$ elements using pairwise comparisons. It is well known that $m=\Theta(n\log n)$ comparisons are both necessary and sufficient when the outcomes of the comparisons are observed with no noise. In this paper, we study the sorting problem when each comparison is incorrect with some fixed yet unknown probability $p$. Unlike the common approach in the literature, which aims to minimize the number of pairwise comparisons $m$ needed to achieve a given desired error probability, we consider randomized algorithms with expected number of queries $\textsf{E}[M]$ and aim to characterize the maximal sorting rate $\frac{n\log n}{\textsf{E}[M]}$ such that the ordering of the elements can be estimated with a vanishing error probability asymptotically. This maximal rate is referred to as the noisy sorting capacity. In this work, we derive upper and lower bounds on the noisy sorting capacity. The two lower bounds -- one for fixed-length algorithms and one for variable-length algorithms -- are established by combining the insertion sort algorithm with the well-known Burnashev--Zigangirov algorithm for channel coding with feedback. Compared with existing methods, the proposed algorithms are universal in the sense that they do not require knowledge of $p$, while maintaining a strictly positive sorting rate. Moreover, we derive a general upper bound on the noisy sorting capacity, along with an upper bound on the maximal rate achievable by sorting algorithms based on insertion sort.
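As an illustration of the algorithmic ingredients, the sketch below pairs insertion sort's noisy binary search with a naive repetition test. This is not the paper's scheme: the majority vote with a fixed repetition count $r$ and a known $p$ stands in for the adaptive Burnashev--Zigangirov test, and all names here are ours.

```python
import random

def noisy_less(a, b, p):
    """Simulated noisy comparator: reports a < b, flipped with probability p."""
    truth = a < b
    return truth if random.random() > p else not truth

def majority_less(a, b, p, r):
    """Decide a < b by majority vote over r independent noisy queries."""
    return sum(noisy_less(a, b, p) for _ in range(r)) > r / 2

def noisy_insertion_sort(items, p, r):
    """Insertion sort whose binary search uses repeated noisy comparisons."""
    result = []
    for x in items:
        lo, hi = 0, len(result)
        while lo < hi:                     # noisy binary search for the slot
            mid = (lo + hi) // 2
            if majority_less(result[mid], x, p, r):
                lo = mid + 1
            else:
                hi = mid
        result.insert(lo, x)
    return result
```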

Related Content

Given a Boolean function $f:\{0,1\}^n\to\{0,1\}$, the goal in the usual query model is to compute $f$ on an unknown input $x \in \{0,1\}^n$ while minimizing the number of queries to $x$. One can also consider a "distinguishing" problem denoted by $f_{\mathsf{sab}}$: given an input $x \in f^{-1}(0)$ and an input $y \in f^{-1}(1)$, either all differing locations are replaced by a $*$, or all differing locations are replaced by $\dagger$, and an algorithm's goal is to identify which of these is the case while minimizing the number of queries. Ben-David and Kothari [ToC'18] introduced the notion of randomized sabotage complexity of a Boolean function to be the zero-error randomized query complexity of $f_{\mathsf{sab}}$. A natural follow-up question is to understand $\mathsf{Q}(f_{\mathsf{sab}})$, the quantum query complexity of $f_{\mathsf{sab}}$. In this paper, we initiate a systematic study of this. The following are our main results: $\bullet\;\;$ If we have additional query access to $x$ and $y$, then $\mathsf{Q}(f_{\mathsf{sab}})=O(\min\{\mathsf{Q}(f),\sqrt{n}\})$. $\bullet\;\;$ If an algorithm is also required to output a differing index of a 0-input and a 1-input, then $\mathsf{Q}(f_{\mathsf{sab}})=O(\min\{\mathsf{Q}(f)^{1.5},\sqrt{n}\})$. $\bullet\;\;$ $\mathsf{Q}(f_{\mathsf{sab}}) = \Omega(\sqrt{\mathsf{fbs}(f)})$, where $\mathsf{fbs}(f)$ denotes the fractional block sensitivity of $f$. By known results, along with the results in the previous bullets, this implies that $\mathsf{Q}(f_{\mathsf{sab}})$ is polynomially related to $\mathsf{Q}(f)$. $\bullet\;\;$ The bound above is easily seen to be tight for standard functions such as And, Or, Majority and Parity. We show that when $f$ is the Indexing function, $\mathsf{Q}(f_{\mathsf{sab}})=\Theta(\mathsf{fbs}(f))$, ruling out the possibility that $\mathsf{Q}(f_{\mathsf{sab}})=\Theta(\sqrt{\mathsf{fbs}(f)})$ for all $f$.
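The distinguishing task $f_{\mathsf{sab}}$ itself is easy to state classically; here is a toy construction of the sabotaged inputs (function name and encoding are ours):

```python
def sabotage(x, y, marker):
    """Given x in f^{-1}(0) and y in f^{-1}(1), replace every position where
    they differ by `marker`; an algorithm sees only the result and must
    decide whether the marker was '*' or the dagger symbol."""
    assert len(x) == len(y)
    return [marker if xi != yi else xi for xi, yi in zip(x, y)]

# f = OR on 3 bits: x = 000 is a 0-input, y = 010 is a 1-input.
x, y = [0, 0, 0], [0, 1, 0]
print(sabotage(x, y, '*'))   # [0, '*', 0]
print(sabotage(x, y, '†'))   # [0, '†', 0]
```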

Annealed Sequential Monte Carlo (SMC) samplers are special cases of SMC samplers where the sequence of distributions can be embedded in a smooth path of distributions. Using this underlying path of distributions and a performance model based on the variance of the normalisation constant estimator, we systematically study dense schedule and large particle limits. From our theory and adaptive methods emerges a notion of global barrier capturing the inherent complexity of normalisation constant approximation under our performance model. We then turn the resulting approximations into surrogate objective functions of algorithm performance, and use them for methodology development. We obtain novel adaptive methodologies, Sequential SMC (SSMC) and Sequential AIS (SAIS) samplers, which address practical difficulties inherent in previous adaptive SMC methods. First, our SSMC algorithms are predictable: they produce a sequence of increasingly precise estimates at deterministic and known times. Second, SAIS, a special case of SSMC, enables schedule adaptation at a memory cost constant in the number of particles and requires much less communication. Finally, these characteristics make SAIS highly efficient on GPUs. We develop an open-source, high-performance GPU implementation based on our methodology and demonstrate up to a hundred-fold speed improvement compared to state-of-the-art adaptive AIS methods.
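A minimal, assumption-laden sketch of the underlying annealed estimator (no resampling and no MCMC refreshes, so the increments telescope into plain importance sampling; the paper's SSMC/SAIS additionally adapt the schedule):

```python
import numpy as np

def annealed_estimator(log_p0, log_p1, sample_p0, betas, n_particles, rng):
    """Estimate Z_1/Z_0 along the geometric path
    log p_beta = (1 - beta) log p0 + beta log p1."""
    x = sample_p0(n_particles, rng)        # particles from the base distribution
    log_w = np.zeros(n_particles)          # accumulated log-weights
    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        log_w += (b_next - b_prev) * (log_p1(x) - log_p0(x))
        # A full sampler would apply an MCMC kernel targeting p_{b_next} here.
    return np.mean(np.exp(log_w))          # estimator of Z_1 / Z_0

# p0 = exp(-x^2/2) and p1 = exp(-(x-1)^2/2) both normalise to sqrt(2*pi),
# so the true ratio is 1.
rng = np.random.default_rng(0)
est = annealed_estimator(lambda x: -0.5 * x**2,
                         lambda x: -0.5 * (x - 1.0)**2,
                         lambda n, r: r.standard_normal(n),
                         np.linspace(0.0, 1.0, 11), 10_000, rng)
```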

We give a simple algorithm for the dynamic approximate All-Pairs Shortest Paths (APSP) problem. Given a graph $G = (V, E, l)$ with polynomially bounded edge lengths, our data structure processes $|E|$ edge insertions and deletions in total time $|E|^{1 + o(1)}$ and provides query access to $|E|^{o(1)}$-approximate distances in time $\tilde{O}(1)$ per query. We produce a data structure that mimics Thorup-Zwick distance oracles [TZ'05], but is dynamic and deterministic. Our algorithm selects a small number of pivot vertices. Then, for every other vertex, it reduces distance computation to maintaining distances to a small neighborhood around that vertex and to the nearest pivot. We maintain distances between pivots efficiently by representing them in a smaller graph and recursing. We construct these smaller graphs by (a) reducing vertex count using the dynamic distance-preserving core graphs of Kyng-Meierhans-Probst Gutenberg [KMPG'24] in a black-box manner and (b) reducing edge count using a dynamic spanner akin to Chen-Kyng-Liu-Meierhans-Probst Gutenberg [CKL+'24]. Our dynamic spanner internally uses an APSP data structure. Choosing a large enough size reduction factor in the first step allows us to simultaneously bootstrap our spanner and a dynamic APSP data structure. Notably, our approach does not need expander graphs, an otherwise ubiquitous tool in derandomization.
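A static caricature of the pivot idea, with random rather than deterministic pivot selection and none of the recursion, core graphs, or spanners (all names are ours):

```python
import heapq
import random

def dijkstra(graph, src):
    """Single-source shortest-path distances; graph is {u: {v: length}}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, l in graph[u].items():
            if d + l < dist.get(v, float('inf')):
                dist[v] = d + l
                heapq.heappush(pq, (d + l, v))
    return dist

def pivot_oracle(graph, n_pivots, rng):
    """Every vertex remembers its nearest pivot; queries are answered by
    routing u -> pivot(u) -> pivot(v) -> v. A real oracle adds local balls
    and recursion on the pivot graph to certify the approximation."""
    pivots = rng.sample(sorted(graph), n_pivots)   # rng: a random.Random
    dist_from = {p: dijkstra(graph, p) for p in pivots}
    nearest = {u: min(pivots, key=lambda p: dist_from[p].get(u, float('inf')))
               for u in graph}
    def query(u, v):
        pu, pv = nearest[u], nearest[v]
        return (dist_from[pu].get(u, float('inf'))
                + dist_from[pu].get(pv, float('inf'))
                + dist_from[pv].get(v, float('inf')))
    return query
```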

In streaming PCA, we see a stream of vectors $x_1, \dotsc, x_n \in \mathbb{R}^d$ and want to estimate the top eigenvector of their covariance matrix. This is easier if the spectral ratio $R = \lambda_1 / \lambda_2$ is large. We ask: how large does $R$ need to be to solve streaming PCA in $\widetilde{O}(d)$ space? Existing algorithms require $R = \widetilde{\Omega}(d)$. We show: (1) For all mergeable summaries, $R = \widetilde{\Omega}(\sqrt{d})$ is necessary. (2) In the insertion-only model, a variant of Oja's algorithm gets $o(1)$ error for $R = O(\log n \log d)$. (3) No algorithm with $o(d^2)$ space gets $o(1)$ error for $R = O(1)$. Our analysis is the first application of Oja's algorithm to adversarial streams. It is also the first algorithm for adversarial streaming PCA that is designed for a spectral, rather than Frobenius, bound on the tail; and the bound it needs is exponentially better than is possible by adapting a Frobenius guarantee.
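For reference, vanilla Oja's rule, the classical single-pass $O(d)$-space iteration that the paper's variant builds on (the constant learning rate here is a simplification of the schedules used in practice):

```python
import numpy as np

def oja_top_eigenvector(stream, d, lr=0.01, seed=0):
    """One pass of Oja's rule over a stream of vectors x_t, maintaining a
    unit-norm estimate of the top eigenvector of the covariance matrix."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for x in stream:
        w += lr * (x @ w) * x    # stochastic power-iteration step
        w /= np.linalg.norm(w)   # renormalise to the unit sphere
    return w
```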

We introduce the novel class $(E_\alpha)_{\alpha \in [-\infty,1)}$ of reverse map projection embeddings, each one defining a unique new method of encoding classical data into quantum states. Inspired by well-known map projections from the unit sphere onto its tangent planes, used in practice in cartography, these embeddings address the common drawback of the amplitude embedding method, wherein scalar multiples of data points are identified and information about the norm of data is lost. We show how reverse map projections can be utilised as equivariant embeddings for quantum machine learning. Using these methods, we can leverage symmetries in classical datasets to significantly strengthen performance on quantum machine learning tasks. Finally, we select four values of $\alpha$ with which to perform a simple classification task, taking $E_\alpha$ as the embedding and experimenting with both equivariant and non-equivariant setups. We compare their results alongside those of standard amplitude embedding.
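The norm-loss drawback is easy to see in code. Below, `tangent_plane_embedding` is a hypothetical stand-in for a reverse map projection (lifting a point from a tangent plane onto the sphere by appending a constant amplitude before normalising); the actual $E_\alpha$ family is defined in the paper, and the parameter `c` is purely illustrative:

```python
import numpy as np

def amplitude_embedding(x):
    """Standard amplitude embedding: normalises x, so ||x|| is lost and all
    scalar multiples of x collapse to the same state."""
    return x / np.linalg.norm(x)

def tangent_plane_embedding(x, c=1.0):
    """Hypothetical reverse-map-style embedding: append a constant amplitude
    before normalising, so distinct scalar multiples stay distinguishable."""
    v = np.append(x, c)
    return v / np.linalg.norm(v)

x = np.array([3.0, 4.0])
print(amplitude_embedding(x), amplitude_embedding(2 * x))          # identical
print(tangent_plane_embedding(x), tangent_plane_embedding(2 * x))  # differ
```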

The nonparametric contextual bandit is an important model of sequential decision making. Under the $\alpha$-Tsybakov margin condition, existing research has established a regret bound of $\tilde{O}\left(T^{1-\frac{\alpha+1}{d+2}}\right)$ for bounded supports. However, the optimal regret with unbounded contexts has not been analyzed. The challenge in solving contextual bandit problems with unbounded support is to achieve both the exploration-exploitation tradeoff and the bias-variance tradeoff simultaneously. In this paper, we solve the nonparametric contextual bandit problem with unbounded contexts. We propose two nearest neighbor methods combined with UCB exploration. The first method uses a fixed $k$. Our analysis shows that this method achieves minimax optimal regret under a weak margin condition and relatively light-tailed context distributions. The second method uses an adaptive $k$. By a proper data-driven selection of $k$, this method achieves an expected regret of $\tilde{O}\left(T^{1-\frac{(\alpha+1)\beta}{\alpha+(d+2)\beta}}+T^{1-\beta}\right)$, in which $\beta$ is a parameter describing the tail strength of the context distribution. This bound matches the minimax lower bound up to logarithmic factors, indicating that the second method is approximately optimal.
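A schematic of the fixed-$k$ variant in the spirit of $k$-NN regression plus a UCB bonus (the bonus form and bookkeeping are our guesses, not the paper's):

```python
import numpy as np

def knn_ucb_choose(context, history, k, t):
    """Choose an arm given `history`, a dict arm -> list of (context, reward)
    pairs observed so far. Each arm is scored by the mean reward of its k
    recorded contexts nearest to `context`, plus an exploration bonus."""
    best_arm, best_score = None, -np.inf
    for arm, records in history.items():
        if len(records) < k:
            return arm                       # force exploration of thin arms
        dists = [np.linalg.norm(c - context) for c, _ in records]
        nearest = np.argsort(dists)[:k]
        mean = np.mean([records[i][1] for i in nearest])
        score = mean + np.sqrt(2.0 * np.log(max(t, 2)) / k)  # UCB bonus
        if score > best_score:
            best_arm, best_score = arm, score
    return best_arm
```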

The orthogonality dimension of a graph over $\mathbb{R}$ is the smallest integer $d$ for which one can assign to every vertex a nonzero vector in $\mathbb{R}^d$ such that every two adjacent vertices receive orthogonal vectors. For an integer $d$, the $d$-Ortho-Dim$_\mathbb{R}$ problem asks to decide whether the orthogonality dimension of a given graph over $\mathbb{R}$ is at most $d$. We prove that for every integer $d \geq 3$, the $d$-Ortho-Dim$_\mathbb{R}$ problem parameterized by the vertex cover number $k$ admits a kernel with $O(k^{d-1})$ vertices and bit-size $O(k^{d-1} \cdot \log k)$. We complement this result by a nearly matching lower bound, showing that for any $\varepsilon > 0$, the problem admits no kernel of bit-size $O(k^{d-1-\varepsilon})$ unless $\mathsf{NP} \subseteq \mathsf{coNP/poly}$. We further study the kernelizability of orthogonality dimension problems in additional settings, including over general fields and under various structural parameterizations.
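The certificate being decided is concrete: a map from vertices to nonzero vectors in $\mathbb{R}^d$ that is orthogonal across every edge. A direct checker:

```python
import numpy as np

def is_orthogonal_representation(edges, vectors, tol=1e-9):
    """Check that vertex -> nonzero vector assignments give orthogonal
    vectors on every edge, witnessing orthogonality dimension <= d."""
    if any(np.linalg.norm(v) <= tol for v in vectors.values()):
        return False
    return all(abs(np.dot(vectors[u], vectors[v])) <= tol for u, v in edges)

# K_3 has orthogonality dimension 3: the standard basis of R^3 works.
edges = [(0, 1), (0, 2), (1, 2)]
vectors = {i: np.eye(3)[i] for i in range(3)}
print(is_orthogonal_representation(edges, vectors))  # True
```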

The problem of column subset selection asks for a subset of columns from an input matrix such that the matrix can be reconstructed as accurately as possible within the span of the selected columns. A natural extension is to consider a setting where the matrix rows are partitioned into two groups, and the goal is to choose a subset of columns that minimizes the maximum reconstruction error of both groups, relative to their respective best rank-$k$ approximations. Extending the known results of column subset selection to this fair setting is not straightforward: in certain scenarios it is unavoidable to choose columns separately for each group, resulting in double the expected column count. We propose a deterministic leverage-score sampling strategy for the fair setting and show that sampling a column subset of minimum size becomes NP-hard in the presence of two groups. Despite these negative results, we give an approximation algorithm that guarantees a solution whose size is within 1.5 times that of the optimal solution. We also present practical heuristic algorithms based on rank-revealing QR factorization. Finally, we validate our methods through an extensive set of experiments using real-world data.
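A single-group sketch of deterministic rank-$k$ leverage-score selection (the fair two-group balancing that the paper studies is omitted; function names are ours):

```python
import numpy as np

def leverage_scores(A, k):
    """Rank-k leverage scores of A's columns: squared column norms of the
    top-k right singular vectors."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sum(Vt[:k] ** 2, axis=0)

def select_columns(A, k, c):
    """Deterministically keep the c columns with the largest scores."""
    return np.argsort(leverage_scores(A, k))[::-1][:c]
```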

Khosravi, Drnov\v{s}ek and Moslehian [\textit{Filomat, 2012}] derived the Buzano inequality for Hilbert C*-modules. Using this inequality, we derive the Deutsch entropic uncertainty principle for Hilbert C*-modules over commutative unital C*-algebras.
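For reference, the classical Hilbert-space form of Buzano's inequality, which the cited work lifts to Hilbert C*-modules (where the absolute values and norms become C*-algebra-valued):

```latex
% Buzano's inequality in a Hilbert space H, for a unit vector e in H:
\left| \langle x, e \rangle \langle e, y \rangle \right|
  \;\le\; \frac{1}{2}\left( \|x\|\,\|y\| + \left|\langle x, y \rangle\right| \right),
  \qquad x, y \in H.
```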

Selecting $k$ out of $m$ items based on the preferences of $n$ heterogeneous agents is a widely studied problem in algorithmic game theory. If agents have approval preferences over individual items and harmonic utility functions over bundles -- an agent receives $\sum_{j=1}^t\frac{1}{j}$ utility if $t$ of her approved items are selected -- then welfare optimisation is captured by a voting rule known as Proportional Approval Voting (PAV). PAV also satisfies demanding fairness axioms. However, finding a winning set of items under PAV is NP-hard. In search of a tractable method with strong fairness guarantees, a bounded local search version of PAV was proposed by Aziz et al. It proceeds by starting with an arbitrary size-$k$ set $W$ and, at each step, checking if there is a pair of candidates $a\in W$, $b\not\in W$ such that swapping $a$ and $b$ increases the total welfare by at least $\varepsilon$; if yes, it performs the swap. Aziz et al.~show that setting $\varepsilon=\frac{n}{k^2}$ ensures both the desired fairness guarantees and polynomial running time. However, they leave it open whether the algorithm converges in polynomial time if $\varepsilon$ is very small (in particular, if we do not stop until there are no welfare-improving swaps). We resolve this open question by showing that if $\varepsilon$ can be arbitrarily small, the running time of this algorithm may be super-polynomial. Specifically, we prove a lower bound of~$\Omega(k^{\log k})$ if improvements are chosen lexicographically. To complement our lower bound, we provide an empirical comparison of two variants of local search -- better-response and best-response -- on several real-life data sets and a variety of synthetic data sets. Our experiments indicate that, empirically, better-response exhibits faster running time than best-response.
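A direct transcription of the swap dynamics described above (lexicographic better-response; the candidate/voter encodings are ours, with `approvals` a list of per-voter approval sets):

```python
def pav_score(committee, approvals):
    """PAV welfare: a voter with t approved candidates in the committee
    contributes the harmonic sum 1 + 1/2 + ... + 1/t."""
    return sum(sum(1.0 / j for j in range(1, len(A & committee) + 1))
               for A in approvals)

def local_search_pav(candidates, approvals, W, eps=0.0):
    """Swap local search: replace a in W by b outside W whenever the swap
    improves welfare by more than eps, scanning pairs lexicographically.
    With eps = n/k^2 this is the polynomial-time variant of Aziz et al.;
    with eps ~ 0 it may need Omega(k^{log k}) steps per the lower bound."""
    W = set(W)
    improved = True
    while improved:
        improved = False
        for a in sorted(W):
            for b in sorted(candidates - W):
                swapped = (W - {a}) | {b}
                if pav_score(swapped, approvals) > pav_score(W, approvals) + eps:
                    W, improved = swapped, True
                    break
            if improved:
                break
    return W
```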
