
We propose a new interpolation-based error decoding algorithm for $(n,k)$ Reed-Solomon (RS) codes over a finite field of size $q$, where $n=q-1$ is the length and $k$ is the dimension. In particular, we employ the fast Fourier transform (FFT) together with properties of a circulant matrix associated with the error interpolation polynomial and some known results from elimination theory in the decoding process. The asymptotic computational complexity of the proposed algorithm for correcting any $t \leq \lfloor \frac{n-k}{2} \rfloor$ errors in an $(n,k)$ RS code is of order $\mathcal{O}(t\log^2 t)$ and $\mathcal{O}(n\log^2 n \log\log n)$ over FFT-friendly and arbitrary finite fields, respectively, matching the best asymptotic decoding complexity currently known for the same set of parameters.
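
As a toy illustration of the transform-domain structure such decoders exploit (a minimal sketch over GF(13), not the paper's algorithm): for an RS code of length $n = q-1$ built by evaluating a degree-$<k$ polynomial at powers of a primitive element, the DFT of any codeword vanishes on $n-k$ consecutive frequencies, so the corresponding DFT coefficients of the received word are exactly the syndromes, i.e., the Fourier coefficients of the error vector alone.

```python
# Minimal sketch (not the paper's decoder): over GF(13) with n = 12, the
# syndromes of a received RS word are DFT coefficients of the error vector.
p, g = 13, 2          # GF(13); g = 2 is a primitive root, so alpha = g has order n = p - 1
n, k = p - 1, 6       # (12, 6) RS code, corrects t <= (n - k) // 2 = 3 errors

def dft(v):           # naive O(n^2) DFT over GF(p); a fast decoder uses an FFT here
    return [sum(v[i] * pow(g, i * j, p) for i in range(n)) % p for j in range(n)]

f = [5, 1, 0, 7, 2, 3]                                   # message: coefficients of f, deg f < k
c = [sum(f[m] * pow(g, i * m, p) for m in range(k)) % p  # codeword: c_i = f(alpha^i)
     for i in range(n)]
e = [0] * n; e[2], e[9] = 4, 11                          # error vector with t = 2 errors
r = [(ci + ei) % p for ci, ei in zip(c, e)]              # received word

# The DFT of a codeword vanishes at j = 1..n-k, so those coefficients of r
# equal the DFT of e alone: these are exactly the 2t syndromes.
assert dft(c)[1:n - k + 1] == [0] * (n - k)
assert dft(r)[1:n - k + 1] == dft(e)[1:n - k + 1]
print("syndromes:", dft(r)[1:n - k + 1])
```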

Related content

The combination of Terahertz (THz) and massive multiple-input multiple-output (MIMO) is promising for meeting the increasing data rate demand of future wireless communication systems, thanks to the huge bandwidth and spatial degrees of freedom. However, unique channel features such as the near-field beam split effect make channel estimation particularly challenging in THz massive MIMO systems. On one hand, adopting the conventional angular-domain transformation dictionary designed for low-frequency far-field channels degrades the channel sparsity and destroys the sparsity structure in the transformed domain. On the other hand, most existing compressive sensing-based channel estimation algorithms cannot achieve high performance and low complexity simultaneously. To alleviate these issues, in this paper we first adopt frequency-dependent near-field dictionaries to maintain good channel sparsity and sparsity structure in the transformed domain under the near-field beam split effect. Then, a deep unfolding-based wideband THz massive MIMO channel estimation algorithm is proposed. In each iteration of the unitary approximate message passing-sparse Bayesian learning (UAMP-SBL) algorithm, the optimal update rule is learned by a deep neural network (DNN), whose structure is customized to effectively exploit the inherent channel patterns. Furthermore, a mixed training method based on novel designs of the DNN structure and the loss function is developed to effectively train on data from different system configurations. Simulation results validate the superiority of the proposed algorithm in terms of performance, complexity, and robustness.
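
For readers unfamiliar with deep unfolding, the generic pattern is sketched below with ISTA as the unrolled iteration (an illustration only; the paper unrolls UAMP-SBL and learns the update rule with a customized DNN, which is not reproduced here). Each "layer" is one iteration of a classic sparse-recovery update with per-layer parameters that would be learned from data.

```python
# Generic deep-unfolding pattern (ISTA unrolled into layers), for intuition only.
import numpy as np

def unfolded_ista(y, A, thetas):
    """One forward pass: len(thetas) unfolded ISTA iterations, where each
    theta = (step, threshold) plays the role of learned per-layer weights."""
    x = np.zeros(A.shape[1])
    for step, thresh in thetas:
        r = x - step * A.T @ (A @ x - y)                      # gradient step
        x = np.sign(r) * np.maximum(np.abs(r) - thresh, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50)) / np.sqrt(20)      # compressed measurements
x_true = np.zeros(50); x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
y = A @ x_true
# Hand-set, deliberately many layers; learning per-layer parameters is what
# lets unfolded networks get away with far fewer layers in practice.
thetas = [(0.1, 0.01)] * 100
print(np.round(unfolded_ista(y, A, thetas)[[3, 17, 41]], 2))  # ~ the true support values
```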

We solve a problem of Dujmovi\'c and Wood (2007) by showing that a complete convex geometric graph on $n$ vertices cannot be decomposed into fewer than $n-1$ star-forests, each consisting of noncrossing edges. This bound is clearly tight. We also discuss similar questions for abstract graphs.
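The matching upper bound is simple to see; a minimal sketch (the vertex labeling and edge grouping below are one standard choice, not taken from the paper):

```python
# With vertices 0..n-1 in convex position, K_n splits into n - 1 stars
# S_0,...,S_{n-2}, where S_i joins i to every j > i. Edges of a star share an
# endpoint, hence are pairwise noncrossing, so each star is a valid
# (one-component) noncrossing star-forest.
n = 6
stars = [[(i, j) for j in range(i + 1, n)] for i in range(n - 1)]

all_edges = {e for star in stars for e in star}
assert len(all_edges) == n * (n - 1) // 2      # every edge of K_n covered exactly once
assert sum(len(s) for s in stars) == len(all_edges)
print(f"K_{n} decomposed into {len(stars)} noncrossing stars")
```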

Shannon proved that almost all Boolean functions require a circuit of size $\Theta(2^n/n)$. We prove a quantum analog of this classical result. Unlike in the classical case, the number of quantum circuits of any fixed size that we allow is uncountably infinite. Our main tool is a classical result in real algebraic geometry bounding the number of realizable sign conditions of any finite set of real polynomials in many variables.
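
For orientation, the classical counting argument behind Shannon's bound can be summarized as follows (a standard sketch, not the paper's quantum argument, which must replace the count by a bound on realizable sign conditions):

```latex
% If every n-variable Boolean function had a circuit of size s, then
% comparing the number of functions with the number of circuits gives
\[
  2^{2^n} \;\le\; \#\{\text{circuits of size } s\} \;\le\; 2^{O(s \log s)}
  \qquad\Longrightarrow\qquad
  s \;=\; \Omega\!\left(\frac{2^n}{n}\right).
\]
% Quantumly, the middle quantity is uncountable, which is why a counting
% argument alone no longer suffices.
```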

We propose an accelerated block proximal linear framework with adaptive momentum (ABPL$^+$) for nonconvex and nonsmooth optimization. We analyze the potential causes of the extrapolation step failing in some algorithms, and resolve this issue by enhancing the comparison process that evaluates the trade-off between the proximal gradient step and the linear extrapolation step in our algorithm. Furthermore, we extend our algorithm to any number of variable blocks, allowing each cycle to randomly shuffle the update order of the blocks. Additionally, under mild assumptions, we prove that ABPL$^+$ monotonically decreases the function value without strict restrictions on the extrapolation parameters and step size, demonstrate the viability and effectiveness of updating the blocks in random order, and show directly and intuitively that the set of limit points of the sequence generated by our algorithm consists of critical points. Moreover, we establish the global convergence as well as the linear and sublinear convergence rates of our algorithm by utilizing the Kurdyka-{\L}ojasiewicz (K{\L}) condition. To enhance the effectiveness and flexibility of our algorithm, we also extend the study to an inexact version of our algorithm and construct an adaptive extrapolation parameter strategy, which improves its overall performance. We apply our algorithm to nonnegative matrix factorization with the $\ell_0$ norm and nonnegative tensor decomposition with the $\ell_0$ norm, and perform extensive numerical experiments to validate its effectiveness and efficiency. A single-block sketch of the comparison idea appears below.
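
A minimal single-block sketch of that comparison idea, on a convex toy problem (an illustration of the principle, not the paper's ABPL$^+$): take a proximal gradient step from both the plain iterate and the extrapolated point, then keep whichever candidate has the lower objective, so a failed extrapolation can never increase the function value.

```python
# Compare-and-keep proximal gradient with momentum on a LASSO toy problem.
import numpy as np

rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((30, 10)), rng.standard_normal(30), 0.1
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient

F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
grad = lambda x: A.T @ (A @ x - b)
prox = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # soft threshold

x_prev = x = np.zeros(10)
for it in range(200):
    y = x + 0.9 * (x - x_prev)                     # linear extrapolation (momentum)
    cand_extra = prox(y - grad(y) / L, lam / L)    # step from the extrapolated point
    cand_plain = prox(x - grad(x) / L, lam / L)    # plain proximal gradient step
    x_prev, x = x, min(cand_extra, cand_plain, key=F)  # keep the better candidate
print("final objective:", F(x))
```

Since the plain step alone already decreases $F$, taking the pointwise better of the two candidates preserves monotone descent regardless of the extrapolation parameter.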

The (bitwise) complement $\overline{x}$ of a binary word $x$ is obtained by changing each $0$ in $x$ to $1$ and vice versa. An $\textit{antisquare}$ is a nonempty word of the form $x\, \overline{x}$. In this paper, we study infinite binary words that do not contain arbitrarily large antisquares. For example, we show that the repetition threshold for the language of infinite binary words containing exactly two distinct antisquares is $(5+\sqrt{5})/2$. We also study repetition thresholds for related classes, where "two" in the previous sentence is replaced by a larger number. We say a binary word is $\textit{good}$ if the only antisquares it contains are $01$ and $10$. We characterize the minimal antisquares, that is, those words that are antisquares but all of whose proper factors are good. We determine the growth rate of the number of good words of length $n$, as well as the repetition threshold that separates polynomial growth from exponential growth of the number of good words.
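
The definitions are easy to make executable; a small sketch (brute-force enumeration, for illustration only):

```python
# Enumerate the antisquare factors x + complement(x) of a binary word,
# and test the "good" property (only 01 and 10 occur as antisquares).
def complement(x):
    return x.translate(str.maketrans("01", "10"))

def antisquare_factors(w):
    """All distinct factors of w of the form x + complement(x)."""
    found = set()
    for i in range(len(w)):
        for half in range(1, (len(w) - i) // 2 + 1):
            x, y = w[i:i + half], w[i + half:i + 2 * half]
            if y == complement(x):
                found.add(x + y)
    return found

def is_good(w):
    return antisquare_factors(w) <= {"01", "10"}

print(sorted(antisquare_factors("0011")))   # ['0011', '01']: 0011 itself is an antisquare
print(is_good("00010001"))                  # True: only 01 and 10 occur
```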

A private information retrieval (PIR) scheme allows a client to retrieve a data item $x_i$ among $n$ items $x_1,x_2,\ldots,x_n$ from $k$ servers, without revealing the index $i$ even when $t < k$ servers collude and try to learn $i$. Such a PIR scheme is said to be $t$-private. A PIR scheme is $v$-verifiable if the client can verify the correctness of the retrieved $x_i$ even when $v \leq k$ servers collude and try to fool the client by sending manipulated data. Most of the previous works in the literature on PIR assumed that $v < k$, leaving the case of all-colluding servers open. We propose a generic construction that combines a linear map commitment (LMC) and an arbitrary linear PIR scheme to produce a $k$-verifiable PIR scheme, termed a committed PIR scheme. Such a scheme guarantees that even in the worst scenario, when all servers are under the control of an attacker, although the privacy is unavoidably lost, the client will not be fooled into accepting an incorrect $x_i$. We demonstrate the practicality of our proposal by implementing the committed PIR schemes based on the Lai-Malavolta LMC and three well-known PIR schemes using the GMP library and blst, currently the fastest C library for elliptic curve pairings.
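
For background, the simplest linear PIR that such a generic construction could be instantiated with is the classic two-server XOR scheme sketched below (the LMC layer that provides verifiability is omitted; this is not the committed construction itself, and the scheme shown is only $1$-private):

```python
# Classic 2-server linear PIR over GF(2): each server sees a uniformly random
# query vector, so neither server alone learns anything about the index i.
import secrets

def pir_query(n, i):
    """Client: random mask q1, and q2 = q1 xor e_i (differs only at index i)."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1.copy(); q2[i] ^= 1
    return q1, q2

def pir_answer(db, q):
    """Server: inner product of the database with the query, over GF(2)."""
    acc = 0
    for x, sel in zip(db, q):
        if sel: acc ^= x
    return acc

db = [13, 7, 42, 99, 5]          # database of n items (small ints here)
i = 2
q1, q2 = pir_query(len(db), i)
assert pir_answer(db, q1) ^ pir_answer(db, q2) == db[i]   # client recovers x_i
```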

Chain-of-Thought (CoT) prompting has shown promising performance in various reasoning tasks. Recently, Self-Consistency \citep{wang2023selfconsistency} proposed sampling a diverse set of reasoning chains, which may lead to different answers, and selecting the answer that receives the most votes. In this paper, we propose a novel method that uses backward reasoning to verify candidate answers. We mask a token in the question with ${\bf x}$ and ask the LLM to predict the masked token when a candidate answer is provided by \textit{a simple template}, i.e., "\textit{\textbf{If we know the answer of the above question is \{a candidate answer\}, what is the value of unknown variable ${\bf x}$?}}" Intuitively, the LLM is expected to predict the masked token successfully if the provided candidate answer is correct. We further propose FOBAR, which combines forward and backward reasoning to estimate the probability of candidate answers. We conduct extensive experiments on six data sets and three LLMs. Experimental results demonstrate that FOBAR achieves state-of-the-art performance on various reasoning benchmarks.
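
A sketch of the backward-verification step as described (the `ask_llm` callable, the scoring, and the combination rule below are illustrative placeholders, not the paper's exact implementation):

```python
# Backward verification: given a candidate answer, ask the model to recover
# the masked token; candidates that let the model recover it score higher.
BACKWARD_TEMPLATE = (
    "{question_with_x_masked}\n"
    "If we know the answer of the above question is {candidate}, "
    "what is the value of unknown variable x?"
)

def backward_score(ask_llm, masked_question, true_x, candidate, n_samples=8):
    """Fraction of samples in which the model recovers the masked token."""
    prompt = BACKWARD_TEMPLATE.format(
        question_with_x_masked=masked_question, candidate=candidate)
    hits = sum(ask_llm(prompt).strip() == str(true_x) for _ in range(n_samples))
    return hits / n_samples

def combine(candidates, forward_votes, backward_scores):
    """FOBAR-style combination (illustrative weighting): forward votes from
    self-consistency, reweighted by the backward score, then argmax."""
    return max(candidates, key=lambda a: forward_votes[a] * backward_scores[a])

# Toy demo with a stub "LLM" that always answers "4":
print(backward_score(lambda p: "4", "x + 1 = ?", true_x=4, candidate=5))  # 1.0
```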

We give an algorithm that, given an $n$-vertex graph $G$ and an integer $k$, in time $2^{O(k)} n$ either outputs a tree decomposition of $G$ of width at most $2k + 1$ or determines that the treewidth of $G$ is larger than $k$. This is the first 2-approximation algorithm for treewidth that is faster than the known exact algorithms, and in particular improves upon the previous best approximation ratio of 5 in time $2^{O(k)} n$ given by Bodlaender et al. [SIAM J. Comput., 45 (2016)]. Our algorithm works by applying incremental improvement operations to a tree decomposition, using an approach inspired by a proof of Bellenbaum and Diestel [Comb. Probab. Comput., 11 (2002)].
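For concreteness, verifying a claimed tree decomposition and measuring its width is the easy direction (the paper's contribution is producing one of width at most $2k+1$ in time $2^{O(k)} n$, not this check):

```python
# Check the three tree-decomposition axioms and compute the width.
def is_tree_decomposition(vertices, graph_edges, bags, tree_edges):
    # 1) every vertex of the graph appears in some bag
    if any(all(v not in b for b in bags) for v in vertices):
        return False
    # 2) every edge of the graph is contained in some bag
    if any(all(not (u in b and v in b) for b in bags) for u, v in graph_edges):
        return False
    # 3) for every vertex, the bags containing it induce a connected subtree
    for v in vertices:
        nodes = {i for i, b in enumerate(bags) if v in b}
        seen, stack = {min(nodes)}, [min(nodes)]
        while stack:
            x = stack.pop()
            for a, b in tree_edges:
                y = b if a == x else a if b == x else None
                if y in nodes and y not in seen:
                    seen.add(y); stack.append(y)
        if seen != nodes:
            return False
    return True

width = lambda bags: max(map(len, bags)) - 1

# Example: the path a-b-c-d has the width-1 decomposition {a,b},{b,c},{c,d}.
bags = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
print(is_tree_decomposition({"a", "b", "c", "d"},
                            [("a", "b"), ("b", "c"), ("c", "d")],
                            bags, [(0, 1), (1, 2)]), "width:", width(bags))
```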

The $k$-dimensional Weisfeiler-Leman ($k$-WL) algorithm is a simple combinatorial algorithm that was originally designed as a graph isomorphism heuristic. It naturally finds applications in Babai's quasipolynomial time isomorphism algorithm, practical isomorphism solvers, and algebraic graph theory. However, it also has surprising connections to other areas such as logic, proof complexity, combinatorial optimization, and machine learning. The algorithm iteratively computes a coloring of the $k$-tuples of vertices of a graph. Since F\"urer's linear lower bound [ICALP 2001], it has been an open question whether there is a super-linear lower bound on the iteration number of $k$-WL on graphs. We answer this question affirmatively, establishing an $\Omega(n^{k/2})$ lower bound for all $k$.
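
A sketch of the $k = 1$ case (classic color refinement) makes the iteration being counted concrete; $k$-WL runs the same refinement on colorings of $k$-tuples rather than single vertices:

```python
# 1-WL / color refinement: repeatedly refine vertex colors by the multiset
# of neighbor colors until the coloring stabilizes; report the round count.
def color_refinement(adj):
    """adj: {vertex: set of neighbors}. Returns (stable coloring, #iterations)."""
    color = {v: 0 for v in adj}                  # start with the uniform coloring
    for iteration in range(len(adj)):            # stabilizes within n rounds
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v]))) for v in adj}
        palette = {s: c for c, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == color:
            return color, iteration
        color = new
    return color, len(adj)

# A path on 4 vertices: endpoints get one color, inner vertices another.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(color_refinement(adj))
```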

The Ramsey number $R(s, t)$ is the minimum number of nodes $n$ such that every undirected simple graph of order $n$ contains a clique of order $s$ or an independent set of order $t$. This paper explores the application of a best-first search algorithm and reinforcement learning (RL) techniques to find counterexamples to specific Ramsey numbers. We incrementally improve over prior search methods such as random search by introducing a graph vectorization and a deep neural network (DNN)-based heuristic that gauges the likelihood of a graph being a counterexample. The paper also proposes algorithmic optimizations that keep the search runtime polynomial. This paper does not aim to present new counterexamples, but rather introduces and evaluates a framework supporting Ramsey counterexample exploration using other heuristics. Code and methods are made available through a PyPI package and GitHub repository.
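
The check that underlies such a search is simple to state; a brute-force verifier for illustration (the paper's point is guiding the search with learned heuristics, not this check, and the function name here is our own):

```python
# A graph on n vertices witnesses R(s, t) > n iff it has no s-clique and no
# independent set of size t. Brute force over vertex subsets.
from itertools import combinations

def is_counterexample(n, edges, s, t):
    E = {frozenset(e) for e in edges}
    has_clique = any(all(frozenset(p) in E for p in combinations(S, 2))
                     for S in combinations(range(n), s))
    has_indep = any(all(frozenset(p) not in E for p in combinations(S, 2))
                    for S in combinations(range(n), t))
    return not has_clique and not has_indep

# The 5-cycle witnesses R(3, 3) > 5: it is triangle-free and its largest
# independent set has size 2.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_counterexample(5, c5, 3, 3))   # True
```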
