
In the average-case $k$-SUM problem, given $r$ integers chosen uniformly at random from $\{0,\dots,M-1\}$, the objective is to find a ``solution'' set of $k$ numbers that sum to $0$ modulo $M$. In the dense regime of $M \leq r^k$, where solutions exist with high probability, the complexity of these problems is well understood. Much less is known in the sparse regime of $M\gg r^k$, where solutions are unlikely to exist. In this work, we initiate the study of the sparse regime for $k$-SUM and its variant $k$-XOR, especially their planted versions, where a random solution is planted in a randomly generated instance and has to be recovered. We provide evidence for the hardness of these problems and suggest new applications to cryptography.

Complexity. We first study the complexity of these problems in the sparse regime and show:

- Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) $k$-SUM/$k$-XOR when $M = r^k$, we provide non-trivial lower bounds on the running time of algorithms for planted $k$-SUM when $r^k\leq M\leq r^{2k}$.

- Hardness Amplification. We show that for any $M \geq r^k$, if an algorithm running in time $T$ solves planted $k$-SUM/$k$-XOR with success probability $\Omega(1/\text{polylog}(r))$, then there is an algorithm running in time $\tilde{O}(T)$ that solves it with probability $1-o(1)$.

- New Reductions and Algorithms. We provide search-to-decision reductions for $k$-SUM/$k$-XOR, worst-case and average-case reductions from $k$-SUM to the Subset Sum problem, and a new algorithm for average-case $k$-XOR at low densities.

Cryptography. We show that by additionally assuming mild hardness of $k$-XOR, we can construct Public Key Encryption (PKE) from a weaker variant of the Learning Parity with Noise (LPN) problem than was known before.
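
To make the planted variant concrete, here is a minimal sketch of instance generation, assuming the natural planting procedure (resample $k$ random positions conditioned on their values summing to $0$ modulo $M$); the function name and interface are ours, not the paper's.

```python
import random

def planted_ksum_instance(r, k, M, seed=None):
    """Generate r integers in {0, ..., M-1} containing a hidden
    ("planted") set of k entries that sums to 0 modulo M."""
    rng = random.Random(seed)
    xs = [rng.randrange(M) for _ in range(r)]
    # Plant: pick k distinct positions, resample the first k-1 uniformly,
    # and fix the last one so the k planted values sum to 0 mod M.
    planted = rng.sample(range(r), k)
    for i in planted[:-1]:
        xs[i] = rng.randrange(M)
    xs[planted[-1]] = (-sum(xs[i] for i in planted[:-1])) % M
    return xs, sorted(planted)

# A sparse-regime instance (M >> r^k): without planting, a solution
# would be unlikely to exist.
xs, sol = planted_ksum_instance(r=100, k=3, M=10**12, seed=0)
assert sum(xs[i] for i in sol) % 10**12 == 0
```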

Related content

We propose a Block Majorization Minimization method with Extrapolation (BMMe) for solving a class of multi-convex optimization problems. The extrapolation parameters of BMMe are updated using a novel adaptive update rule. By showing that block majorization minimization can be reformulated as a block mirror descent method, with the Bregman divergence adaptively updated at each iteration, we establish subsequential convergence for BMMe. We use this method to design efficient algorithms to tackle nonnegative matrix factorization problems with the $\beta$-divergences ($\beta$-NMF) for $\beta\in [1,2]$. These algorithms, which are multiplicative updates with extrapolation, benefit from our novel results that offer convergence guarantees. We also empirically illustrate the significant acceleration of BMMe for $\beta$-NMF through extensive experiments.
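
A minimal sketch of the kind of algorithm described above: classical multiplicative updates for $\beta$-NMF combined with an extrapolation step. The paper's key novelty is an adaptive rule for the extrapolation parameters; here a fixed weight `alpha` is used purely as an illustrative stand-in.

```python
import numpy as np

def mu_beta_nmf_extrapolated(V, rank, beta=1.5, n_iter=300, alpha=0.5, seed=0):
    """Multiplicative updates for beta-NMF (beta in [1, 2]) with a simple
    fixed-weight extrapolation step (BMMe instead adapts this weight)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, rank)) + 0.1, rng.random((rank, n)) + 0.1
    Wp, Hp = W.copy(), H.copy()   # previous non-extrapolated iterates
    eps = 1e-12
    for _ in range(n_iter):
        WH = W @ H + eps
        Hn = H * (W.T @ (V * WH**(beta - 2))) / (W.T @ WH**(beta - 1) + eps)
        H, Hp = np.maximum(Hn + alpha * (Hn - Hp), eps), Hn  # extrapolate H
        WH = W @ H + eps
        Wn = W * ((V * WH**(beta - 2)) @ H.T) / (WH**(beta - 1) @ H.T + eps)
        W, Wp = np.maximum(Wn + alpha * (Wn - Wp), eps), Wn  # extrapolate W
    return Wp, Hp  # report the last non-extrapolated factors
```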

Given an $n$-vertex $m$-edge digraph $G = (V,E)$ and a set $S \subseteq V$, $|S| = n^{\sigma}$ (for some $0 < \sigma \le 1$) of designated sources, the $S \times V$-direachability problem is to compute for every $s \in S$ the set of all vertices reachable from $s$ in $G$. Known naive algorithms for this problem either run a BFS/DFS separately from every source, and as a result require $O(m \cdot n^{\sigma})$ time, or compute the transitive closure of $G$ in $\tilde O(n^{\omega})$ time, where $\omega < 2.371552$ is the matrix multiplication exponent. Hence, the current state-of-the-art bound for the problem on graphs with $m = \Theta(n^{\mu})$ edges is $\tilde O(n^{\min \{\mu + \sigma, \omega \}})$. Our first contribution is an algorithm with running time $\tilde O(n^{1 + \frac{2}{3} \omega(\sigma)})$ for this problem, where $\omega(\sigma)$ is the rectangular matrix multiplication exponent. Using current state-of-the-art estimates on $\omega(\sigma)$, our exponent is better than $\min \{2 + \sigma, \omega \}$ for $\tilde \sigma \le \sigma \le 0.53$, where $1/3 < \tilde \sigma < 0.3336$ is a universal constant. Our second contribution is a sequence of algorithms $\mathcal A_0, \mathcal A_1, \mathcal A_2, \ldots$ for the $S \times V$-direachability problem. We argue that under a certain assumption that we introduce, for every $\tilde \sigma \le \sigma < 1$, there exists a sufficiently large index $k = k(\sigma)$ such that $\mathcal A_k$ improves upon the current state-of-the-art bounds for $S \times V$-direachability with $|S| = n^{\sigma}$ in the densest regime $\mu = 2$. We show that to establish this assumption, it suffices to devise an algorithm that computes a rectangular max-min matrix product roughly as efficiently as the ordinary $(+, \cdot)$ matrix product. Our algorithms heavily exploit recent constructions of directed shortcuts by Kogan and Parter.
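
For reference, the first naive baseline mentioned above (one graph search per source, $O(m \cdot n^{\sigma})$ total time) takes only a few lines; the naming is ours.

```python
from collections import deque

def direachability(n, edges, sources):
    """S x V direachability by one BFS per source: returns, for each
    source s, the set of vertices reachable from s."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    reach = {}
    for s in sources:
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        reach[s] = seen
    return reach
```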

This paper presents exact formulas for the probability density function (PDF) and moment generating function (MGF) of the sum-product of statistically independent but not necessarily identically distributed (i.n.i.d.) Nakagami-$m$ random variables (RVs) in terms of Meijer's G-function. Exact series representations are also derived for the sum of double-Nakagami RVs, providing useful insight into the trade-off between accuracy and computational cost. Simple asymptotic analytical expressions are provided to gain further insight into the derived formulas, and the achievable diversity order is obtained. The derived statistical properties prove to be a highly useful tool for modeling parallel cascaded Nakagami-$m$ fading channels. The application of these new results is illustrated by deriving exact expressions and simple tight upper bounds for the outage probability (OP) and average symbol error rate (ASER) of several binary and multilevel modulation signals in intelligent reflecting surface (IRS)-assisted communication systems operating over Nakagami-$m$ fading channels. It is demonstrated that the new asymptotic expression is highly accurate and can be extended to encompass a wider range of scenarios. Monte-Carlo simulation results are presented to validate the theoretical frameworks and formulations, and supplementary simulations compare the derived results with two common types of approximations available in the literature, namely the central limit theorem (CLT) and the gamma distribution.
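
The Monte-Carlo side of such a validation is straightforward to reproduce. Below is a hedged sketch that samples the sum of double-Nakagami products (the IRS cascaded-channel model) and estimates an outage probability for an arbitrary threshold; the parameter names are ours.

```python
import numpy as np

def sum_double_nakagami(N, L, m1=2.0, m2=2.0, omega=1.0, seed=0):
    """N Monte-Carlo samples of H = sum_{i=1}^{L} X_i * Y_i, where X_i, Y_i
    are independent Nakagami-m amplitudes (a Nakagami-m variate is the
    square root of a Gamma(m, omega/m) variate)."""
    rng = np.random.default_rng(seed)
    X = np.sqrt(rng.gamma(m1, omega / m1, size=(N, L)))
    Y = np.sqrt(rng.gamma(m2, omega / m2, size=(N, L)))
    return (X * Y).sum(axis=1)

# Empirical outage probability P(H^2 < gamma_th), to be compared with the
# exact Meijer-G formulas or the CLT / gamma approximations.
H = sum_double_nakagami(N=200_000, L=16)
gamma_th = 0.5 * np.mean(H**2)   # arbitrary illustrative threshold
print("OP estimate:", np.mean(H**2 < gamma_th))
```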

A word $w=w_1\cdots w_n$ over the set of positive integers is a Motzkin word whenever $w_1=1$, $1\leq w_k\leq w_{k-1}+1$, and $w_{k-1}\neq w_{k}$ for $k=2, \dots, n$. It can be associated with an $n$-column Motzkin polyomino whose $i$-th column contains $w_i$ cells, all columns being bottom-justified. We reveal bijective connections between Motzkin paths, restricted Catalan words, primitive {\L}ukasiewicz paths, and Motzkin polyominoes. Using these bijections together with the classical one-to-one correspondence with Dyck paths avoiding $UDU$s, we provide generating functions with respect to the length, area, semiperimeter, value of the last symbol, and number of interior points of Motzkin polyominoes. We give asymptotics and closed-form expressions for the total area, total semiperimeter, sum of the last symbol values, and total number of interior points over all Motzkin polyominoes of a given length. We also present and prove an engaging trinomial relation between the number of cells lying at different levels and the first terms of the expansion of $(1+x+x^2)^n$.
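
The definition is easy to explore by brute force. The sketch below enumerates Motzkin words and tallies the polyomino area (the number of cells, $\sum_i w_i$); the counts follow the Motzkin numbers $1, 1, 2, 4, 9, \dots$, consistent with the bijections above.

```python
def motzkin_words(n):
    """Enumerate all Motzkin words of length n: w_1 = 1,
    1 <= w_k <= w_{k-1} + 1, and w_{k-1} != w_k."""
    def extend(w):
        if len(w) == n:
            yield tuple(w)
        else:
            for b in range(1, w[-1] + 2):
                if b != w[-1]:
                    yield from extend(w + [b])
    return extend([1])

for n in range(1, 7):
    ws = list(motzkin_words(n))
    print(n, len(ws), sum(sum(w) for w in ws))  # count, total area
```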

A binary word is called $q$-decreasing, for $q>0$, if every maximal factor of the form $0^a1^b$ with $a>0$ satisfies $q \cdot a > b$. We bijectively link $q$-decreasing words with certain prefixes of the cutting sequence of the line $y=qx$. We show that the number of $q$-decreasing words of length $n$ grows as $C_q\,\Phi(q)^{n}$ for some constant $C_q$ that depends on $q$ but not on $n$. We demonstrate that $\Phi(1)$ is the golden ratio, $\Phi(2)$ is the tribonacci constant, and, more generally, $\Phi(k)$ is the $(k+1)$-bonacci constant. Furthermore, we prove that the function $\Phi(q)$ is strictly increasing and discontinuous at every positive rational point, and that it exhibits a fractal structure related to the Stern--Brocot tree and Minkowski's question mark function.
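
A short brute-force check of the definition and of the growth rate (our own naming): counting $q$-decreasing words for $q = 1$ exhibits Fibonacci-like growth whose ratio approaches the golden ratio, as stated above.

```python
from itertools import groupby

def is_q_decreasing(w, q):
    """True iff every maximal factor 0^a 1^b (a > 0) of the binary
    word w satisfies q * a > b."""
    runs = [(c, len(list(g))) for c, g in groupby(w)]
    return all(q * a > b
               for (c0, a), (c1, b) in zip(runs, runs[1:])
               if c0 == '0' and c1 == '1')

def count_q_decreasing(n, q):
    return sum(is_q_decreasing(format(i, f'0{n}b'), q) for i in range(2**n))

counts = [count_q_decreasing(n, 1) for n in range(1, 15)]
print(counts[-1] / counts[-2])  # ~1.618, the golden ratio Phi(1)
```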

This work concerns elementwise-transformations of spiked matrices: $Y_n = n^{-1/2} f( \sqrt{n} X_n + Z_n)$. Here, $f$ is a function applied elementwise, $X_n$ is a low-rank signal matrix, and $Z_n$ is white noise. We find that principal component analysis is powerful for recovering signal under highly nonlinear or discontinuous transformations. Specifically, in the high-dimensional setting where $Y_n$ is of size $n \times p$ with $n,p \rightarrow \infty$ and $p/n \rightarrow \gamma > 0$, we uncover a phase transition: for signal-to-noise ratios above a sharp threshold -- depending on $f$, the distribution of elements of $Z_n$, and the limiting aspect ratio $\gamma$ -- the principal components of $Y_n$ (partially) recover those of $X_n$. Below this threshold, the principal components of $Y_n$ are asymptotically orthogonal to the signal. In contrast, in the standard setting where $X_n + n^{-1/2}Z_n$ is observed directly, the analogous phase transition depends only on $\gamma$. A similar phenomenon occurs with $X_n$ square and symmetric and $Z_n$ a generalized Wigner matrix.
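
The phase transition is easy to observe numerically. A minimal sketch under assumed parameters: a rank-one spike passed through a discontinuous $f$ (here $f = \mathrm{sign}$), with the overlap between the top right singular vector of $Y_n$ and the planted direction as the recovery measure.

```python
import numpy as np

def alignment(theta, n=1000, gamma=0.5, f=np.sign, seed=0):
    """Overlap |<v_hat, v>| for Y = f(sqrt(n)*theta*u v^T + Z)/sqrt(n),
    with u, v unit vectors and Z standard Gaussian noise."""
    rng = np.random.default_rng(seed)
    p = int(gamma * n)
    u = rng.standard_normal(n); u /= np.linalg.norm(u)
    v = rng.standard_normal(p); v /= np.linalg.norm(v)
    Y = f(np.sqrt(n) * theta * np.outer(u, v) + rng.standard_normal((n, p)))
    Y /= np.sqrt(n)
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return abs(Vt[0] @ v)

# Below the f-dependent threshold the overlap is ~0; above it, the top
# principal component partially recovers the spike.
for theta in [0.2, 0.5, 1.0, 2.0, 4.0]:
    print(theta, round(alignment(theta), 3))
```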

We are given a set $\mathcal{Z}=\{(R_1,s_1),\ldots, (R_n,s_n)\}$, where each $R_i$ is a \emph{range} in $\Re^d$, such as a rectangle or a ball, and $s_i \in [0,1]$ denotes its \emph{selectivity}. The goal is to compute a small-size \emph{discrete data distribution} $\mathcal{D}=\{(q_1,w_1),\ldots, (q_m,w_m)\}$, where $q_j\in \Re^d$ and $w_j\in [0,1]$ for each $1\leq j\leq m$, and $\sum_{1\leq j\leq m}w_j= 1$, such that $\mathcal{D}$ is most \emph{consistent} with $\mathcal{Z}$, i.e., $\mathrm{err}_p(\mathcal{D},\mathcal{Z})=\frac{1}{n}\sum_{i=1}^n\! \lvert{s_i-\sum_{j=1}^m w_j\cdot 1(q_j\in R_i)}\rvert^p$ is minimized. In a database setting, $\mathcal{Z}$ corresponds to a workload of range queries over some table, together with their observed selectivities (i.e., the fraction of tuples returned), and $\mathcal{D}$ can be used as a compact model for approximating the data distribution within the table without accessing the underlying contents. In this paper, we obtain both upper and lower bounds for this problem. In particular, we show that the problem of finding the best data distribution from selectivity queries is $\mathsf{NP}$-complete. On the positive side, we describe a Monte Carlo algorithm that constructs, in time $O((n+\delta^{-d})\delta^{-2}\mathop{\mathrm{polylog}})$, a discrete distribution $\tilde{\mathcal{D}}$ of size $O(\delta^{-2})$ such that $\mathrm{err}_p(\tilde{\mathcal{D}},\mathcal{Z})\leq \min_{\mathcal{D}}\mathrm{err}_p(\mathcal{D},\mathcal{Z})+\delta$ (for $p=1,2,\infty$), where the minimum is taken over all discrete distributions. We also establish conditional lower bounds that strongly indicate the infeasibility of relative approximations, as well as of removing the exponential dependence on the dimension for additive approximations. This suggests that significant improvements to our algorithm are unlikely.
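
The objective itself is simple to evaluate. A minimal sketch for axis-aligned boxes (our own interface), computing $\mathrm{err}_p(\mathcal{D},\mathcal{Z})$ exactly for a given candidate distribution:

```python
import numpy as np

def err_p(points, weights, ranges, sels, p=1):
    """err_p(D, Z): mean of |s_i - sum_{q_j in R_i} w_j|^p over the
    workload (max deviation for p = inf); ranges are boxes (lo, hi)."""
    q, w = np.asarray(points, float), np.asarray(weights, float)
    devs = np.array([abs(s - w[np.all((q >= lo) & (q <= hi), axis=1)].sum())
                     for (lo, hi), s in zip(ranges, sels)])
    return devs.max() if p == np.inf else np.mean(devs**p)

# Toy 2D workload: one box covering the left half, one covering everything.
D_pts, D_wts = [(0.25, 0.5), (0.75, 0.5)], [0.5, 0.5]
Z_ranges = [((0.0, 0.0), (0.5, 1.0)), ((0.0, 0.0), (1.0, 1.0))]
Z_sels = [0.5, 1.0]
print(err_p(D_pts, D_wts, Z_ranges, Z_sels, p=1))  # 0.0: perfectly consistent
```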

In this paper, for any fixed integer $q>2$, we construct $q$-ary codes correcting a burst of at most $t$ deletions with redundancy $\log n+8\log\log n+o(\log\log n)+\gamma_{q,t}$ bits and near-linear encoding/decoding complexity, where $n$ is the message length and $\gamma_{q,t}$ is a constant that depends only on $q$ and $t$. Previous works gave constructions of such codes with redundancy $\log n+O(\log q\log\log n)$ bits or $\log n+O(t^2\log\log n)+O(t\log q)$ bits. In our new construction, the second-order redundancy term is independent of both $q$ and $t$.

We consider the performance of a least-squares regression model, as judged by out-of-sample $R^2$. Shapley values give a fair attribution of the performance of a model to its input features, taking into account interdependencies between features. Evaluating the Shapley values exactly requires solving a number of regression problems that is exponential in the number of features, so a Monte Carlo-type approximation is typically used. We focus on the special case of least-squares regression models, where several tricks can be used to compute and evaluate regression models efficiently. These tricks give a substantial speed-up, allowing many more Monte Carlo samples to be evaluated and thus better accuracy to be achieved. We refer to our method as least-squares Shapley performance attribution (LS-SPA) and describe our open-source implementation.
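
For context, the plain Monte Carlo baseline that LS-SPA accelerates can be written in a few lines: sample feature permutations, fit least squares on each prefix, and attribute the marginal out-of-sample $R^2$ gain to the feature just added. This sketch omits the paper's speed-up tricks.

```python
import numpy as np

def shapley_r2(X_tr, y_tr, X_te, y_te, n_perms=200, seed=0):
    """Monte Carlo Shapley attribution of out-of-sample R^2 to features."""
    rng = np.random.default_rng(seed)
    p = X_tr.shape[1]
    ss_tot = np.sum((y_te - y_te.mean()) ** 2)

    def r2(feats):
        if not feats:
            return 0.0
        beta, *_ = np.linalg.lstsq(X_tr[:, feats], y_tr, rcond=None)
        return 1.0 - np.sum((y_te - X_te[:, feats] @ beta) ** 2) / ss_tot

    attr = np.zeros(p)
    for _ in range(n_perms):
        perm = list(rng.permutation(p))
        prev = 0.0
        for i, j in enumerate(perm):
            cur = r2(perm[: i + 1])
            attr[j] += cur - prev
            prev = cur
    return attr / n_perms  # sums to the full-model out-of-sample R^2

# Toy usage: three informative features out of five.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 5))
y = X @ np.array([2.0, 1.0, 0.5, 0.0, 0.0]) + rng.standard_normal(300)
print(shapley_r2(X[:200], y[:200], X[200:], y[200:], n_perms=100).round(3))
```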

Standard multidimensional scaling takes as input a dissimilarity matrix whose general term $\delta _{ij}$ is a single numerical value. In this paper the input is $\delta _{ij}=[\underline{\delta _{ij}},\overline{\delta _{ij}}]$, where $\underline{\delta _{ij}}$ and $\overline{\delta _{ij}}$ are respectively the lower and upper bounds of the ``dissimilarity'' between the stimulus/object $S_i$ and the stimulus/object $S_j$. As output, instead of representing each stimulus/object by a point on a factorial plane as in other multidimensional scaling methods, the proposed method visualizes each stimulus/object as a rectangle, in order to represent the variation of the dissimilarities. We generalize the classical scaling method, seeking a method that produces results similar to those obtained by Tops Principal Components Analysis. Two examples are presented to illustrate the effectiveness of the proposed method.
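
To make the rectangle output concrete, here is a deliberately naive stand-in (not the paper's generalization of classical scaling): classical MDS on the midpoint dissimilarities gives the rectangle centers, and each object's mean dissimilarity range sets its half-sides. Both choices are our own illustrative assumptions.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Torgerson's classical scaling of a dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J   # double-centered squared dissimilarities
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

def interval_mds_rectangles(D_lo, D_hi, dim=2):
    """Centers from classical MDS of the midpoints; half-sides from each
    object's mean dissimilarity range (a hypothetical, illustrative rule)."""
    centers = classical_mds((D_lo + D_hi) / 2, dim)
    half = ((D_hi - D_lo) / 2).mean(axis=1)
    return centers, np.outer(half, np.ones(dim))  # per-axis half-sides
```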
