
Deterministic identification (DI) for the discrete-time Poisson channel, subject to average and peak power constraints, is considered. It is established that the code size scales as $2^{(n\log n)R}$, where $n$ and $R$ are the block length and coding rate, respectively; the authors recently showed a similar property for Gaussian channels [1]. Lower and upper bounds on the DI capacity of the Poisson channel are developed in this scale. These bounds imply that the DI capacity is infinite in the exponential scale, regardless of the dark current, i.e., the channel noise parameter.


We present deterministic algorithms for maintaining a $(3/2 + \epsilon)$- and a $(2 + \epsilon)$-approximate maximum matching in a fully dynamic graph with worst-case update times $\hat{O}(\sqrt{n})$ and $\tilde{O}(1)$, respectively. The fastest known deterministic worst-case update time algorithms for achieving approximation ratios $(2 - \delta)$ (for any $\delta > 0$) and $(2 + \epsilon)$ were both shown by Roghani et al. [2021], with update times $O(n^{3/4})$ and $O_\epsilon(\sqrt{n})$, respectively. We close the gap between worst-case and amortized algorithms for the two approximation ratios, as the best deterministic amortized update times for the problem are $O_\epsilon(\sqrt{n})$ and $\tilde{O}(1)$, shown in Bernstein and Stein [SODA'2021] and Bhattacharya and Kiss [ICALP'2021], respectively. To achieve both results, we explicitly state a method implicitly used in Nanongkai and Saranurak [STOC'2017] and Bernstein et al. [arXiv'2020] which allows one to transform a dynamic algorithm capable of processing the input in batches into a dynamic algorithm with worst-case update time. \textbf{Independent Work:} Independently and concurrently to our work, Grandoni et al. [arXiv'2021] have presented a fully dynamic algorithm for maintaining a $(3/2 + \epsilon)$-approximate maximum matching with deterministic worst-case update time $O_\epsilon(\sqrt{n})$.

Polar codes are normally designed based on the reliability of the sub-channels in the polarized vector channel. There are various methods, with diverse complexity and accuracy, to evaluate the reliability of the sub-channels. However, designing polar codes solely based on sub-channel reliability may result in poor Hamming distance properties. In this work, we propose a different approach to designing the information set for polar codes and PAC codes, where the objective is to reduce the number of minimum-weight codewords (a.k.a. the error coefficient) of a code designed for maximum reliability. This approach is based on the coset-wise characterization of the rows of the polar transform $\mathbf{G}_N$ involved in the formation of the minimum-weight codewords. Our analysis capitalizes on the properties of the polar transform based on its row and column indices. The numerical results show that the designed codes outperform PAC codes and CRC-Polar codes at practical block error rates of $10^{-2}$-$10^{-3}$. Furthermore, a by-product of the combinatorial properties analyzed in this paper is an alternative enumeration method for the minimum-weight codewords.
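As a small illustration of the quantities in this abstract, the sketch below builds the polar transform $\mathbf{G}_N = \mathbf{F}^{\otimes n}$ for $N=8$ and brute-forces the minimum distance and error coefficient (number of minimum-weight codewords) of the code spanned by a chosen information set. The function names and the particular information set (which happens to coincide with RM(1,3)) are illustrative choices, not the paper's design procedure.

```python
import itertools
import numpy as np

def polar_transform(n):
    """Kronecker power G_N = F^{(x)n} of the kernel F = [[1,0],[1,1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=int)
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def min_weight_profile(G, info_set):
    """Brute-force the minimum Hamming weight and the error coefficient
    (number of minimum-weight codewords) of the code spanned by the
    rows of G indexed by info_set. Feasible only for small codes."""
    rows = G[list(info_set)]
    k = len(info_set)
    best_w, count = None, 0
    for msg in itertools.product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the all-zero codeword
        cw = np.mod(np.array(msg) @ rows, 2)
        w = int(cw.sum())
        if best_w is None or w < best_w:
            best_w, count = w, 1
        elif w == best_w:
            count += 1
    return best_w, count

G8 = polar_transform(3)   # N = 8
info = [3, 5, 6, 7]       # rate-1/2 information set; coincides with RM(1,3)
d_min, A_dmin = min_weight_profile(G8, info)
print(d_min, A_dmin)      # -> 4 14: a [8,4,4] code with 14 weight-4 codewords
```

Reducing `A_dmin` while keeping reliability high is exactly the trade-off the abstract's design targets; for RM(1,3) the weight enumerator $1 + 14z^4 + z^8$ confirms the printed values.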

Inverse optimization is the problem of determining the values of missing input parameters for an associated forward problem that are closest to given estimates and that will make a given target vector optimal. This study is concerned with the relationship of a particular inverse mixed integer linear optimization problem (MILP) to both the forward problem and the separation problem associated with its feasible region. We show that a decision version of the inverse MILP in which a primal bound is verified is coNP-complete, whereas primal bound verification for the associated forward problem is NP-complete, and that the optimal value verification problems for both the inverse problem and the associated forward problem are complete for the complexity class D^P. We also describe a cutting-plane algorithm for solving inverse MILPs that illustrates the close relationship between the separation problem for the convex hull of solutions to a given MILP and the associated inverse problem. The inverse problem is shown to be equivalent to the separation problem for the radial cone defined by all inequalities that are both valid for the convex hull of solutions to the forward problem and binding at the target vector. Thus, the inverse, forward, and separation problems can be said to be equivalent.

The celebrated Bayesian persuasion model considers strategic communication between an informed agent (the sender) and uninformed decision makers (the receivers). The current rapidly growing literature assumes a dichotomy: either the sender is powerful enough to communicate separately with each receiver (a.k.a. private persuasion), or she cannot communicate separately at all (a.k.a. public persuasion). We propose a model that smoothly interpolates between the two, by introducing a natural multi-channel communication structure in which each receiver observes a subset of the sender's communication channels. This captures, e.g., receivers on a network, where information spillover is almost inevitable. We completely characterize when one communication structure is better for the sender than another, in the sense of yielding higher optimal expected utility universally over all prior distributions and utility functions. The characterization is based on a simple pairwise relation among receivers - one receiver information-dominates another if he observes at least the same channels. We prove that a communication structure M_1 is (weakly) better than M_2 if and only if every information-dominating pair of receivers in M_1 is also such in M_2. We also provide an additive FPTAS for the optimal sender's signaling scheme when the number of states is constant and the graph of information-dominating pairs is a directed forest. Finally, we prove that finding an optimal signaling scheme under multi-channel persuasion is computationally hard for a general family of sender's utility functions that admit computationally tractable optimal signaling schemes under both public and private persuasion.
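The pairwise characterization above is purely combinatorial, so it can be checked mechanically. The sketch below (variable and function names are my own) computes the information-dominating pairs of a communication structure given each receiver's set of observed channels, and applies the paper's stated criterion for "weakly better":

```python
def dominating_pairs(channels):
    """channels: dict mapping receiver -> set of observed channels.
    Returns the ordered pairs (i, j), i != j, such that receiver i
    information-dominates receiver j, i.e. i observes every channel
    that j observes."""
    return {(i, j)
            for i in channels for j in channels
            if i != j and channels[j] <= channels[i]}

def weakly_better(m1, m2):
    """The abstract's characterization: structure m1 is (weakly) better
    for the sender than m2 iff every information-dominating pair in m1
    is also an information-dominating pair in m2."""
    return dominating_pairs(m1) <= dominating_pairs(m2)

# Public persuasion: everyone observes the single channel 'c0'.
public = {'r1': {'c0'}, 'r2': {'c0'}}
# Private persuasion: each receiver has a dedicated channel.
private = {'r1': {'c1'}, 'r2': {'c2'}}

print(weakly_better(private, public))  # True: private has no dominance pairs
print(weakly_better(public, private))  # False
```

This also recovers the intuition that private persuasion is the best case and public persuasion the worst: with dedicated channels the dominance relation is empty, which is a subset of every other structure's relation.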

The Shannon-Hartley theorem gives the channel capacity when the signal observation time is infinite. However, the calculation of finite-time capacity, which remains unknown, is essential for guiding the design of practical communication systems. In this paper, we investigate the capacity between two correlated Gaussian processes within a finite-time observation window. We first derive the finite-time capacity by providing a limit expression. Then we numerically compute the maximum transmission rate within a single finite-time window. We reveal that the number of bits transmitted per second within the finite-time window can exceed the classical Shannon capacity, which we call the Exceed-Shannon phenomenon. Furthermore, we derive a finite-time capacity formula under a typical signal autocorrelation case by utilizing the Mercer expansion of trace-class operators, and reveal the connection between the finite-time capacity problem and operator theory. Finally, we analytically prove the existence of the Exceed-Shannon phenomenon in this typical case, and demonstrate the achievability of the finite-time capacity and its compatibility with the classical Shannon capacity.
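To make the Mercer-expansion approach concrete, here is a generic numerical sketch (not the paper's formula): discretize an autocorrelation kernel on a window $[0,T]$, approximate the Mercer eigenvalues of the associated integral operator by the Nystrom method, and treat each eigenmode as an independent Gaussian sub-channel. The Ornstein-Uhlenbeck kernel, the SNR value, and the per-mode $\tfrac12\log_2(1+\mathrm{snr}\,\lambda_i)$ accounting are all illustrative assumptions.

```python
import numpy as np

def mercer_eigs(kernel, T, n):
    """Approximate the Mercer eigenvalues of the integral operator with
    kernel K(t, s) on [0, T] via the Nystrom method on an n-point grid."""
    t = np.linspace(0.0, T, n)
    dt = T / (n - 1)
    K = kernel(t[:, None], t[None, :])
    # Eigenvalues of the scaled Gram matrix approximate those of the operator.
    return np.linalg.eigvalsh(K * dt)

def window_information(kernel, T, snr, n=400):
    """Illustrative bit count for a window of length T: each Mercer mode
    is treated as an independent Gaussian sub-channel."""
    lam = mercer_eigs(kernel, T, n)
    lam = lam[lam > 0]               # drop tiny negative numerical noise
    return 0.5 * np.sum(np.log2(1.0 + snr * lam))

ou = lambda t, s: np.exp(-np.abs(t - s))   # Ornstein-Uhlenbeck kernel
bits = window_information(ou, T=2.0, snr=10.0)
rate = bits / 2.0                          # bits per second over this window
print(round(rate, 3))
```

Comparing such a finite-window rate against the infinite-horizon Shannon limit for the same process is the kind of experiment behind the Exceed-Shannon observation in the abstract.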

This paper considers the performance of long Reed-Muller (RM) codes transmitted over binary memoryless symmetric (BMS) channels under bitwise maximum-a-posteriori decoding. Its main result is that the family of binary RM codes achieves capacity on any BMS channel with respect to bit-error rate. This resolves a long-standing open problem that connects information theory and error-correcting codes. In contrast with the earlier result for the binary erasure channel, the new proof does not rely on hypercontractivity. Instead, it combines a nesting property of RM codes with new information inequalities relating the generalized extrinsic information transfer function and the extrinsic minimum mean-squared error.

Spanners have been shown to be a powerful tool in graph algorithms. Many spanner constructions use a certain type of clustering at their core, where each cluster has small diameter and there are relatively few spanner edges between clusters. In this paper, we provide a clustering algorithm that, given $k\geq 2$, can be used to compute a spanner of stretch $2k-1$ and expected size $O(n^{1+1/k})$ in $k$ rounds in the CONGEST model. This improves upon the state of the art (by Elkin, and Neiman [TALG'19]) by making the bounds on both running time and stretch independent of the random choices of the algorithm, whereas they only hold with high probability in previous results. Spanners are used in certain synchronizers, thus our improvement directly carries over to such synchronizers. Furthermore, for keeping the \emph{total} number of inter-cluster edges small in low diameter decompositions, our clustering algorithm provides the following guarantees. Given $\beta\in (0,1]$, we compute a low diameter decomposition with diameter bound $O\left(\frac{\log n}{\beta}\right)$ such that each edge $e\in E$ is an inter-cluster edge with probability at most $\beta\cdot w(e)$ in $O\left(\frac{\log n}{\beta}\right)$ rounds in the CONGEST model. Again, this improves upon the state of the art (by Miller, Peng, and Xu [SPAA'13]) by making the bounds on both running time and diameter independent of the random choices of the algorithm, whereas they only hold with high probability in previous results.
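For readers unfamiliar with the stretch/size trade-off underlying this abstract, the classical *sequential* greedy spanner (not the paper's CONGEST algorithm) makes it concrete: scanning the edges and keeping $(u,v)$ only when the current spanner distance exceeds $2k-1$ yields stretch $2k-1$ and $O(n^{1+1/k})$ edges, by the girth-$>2k$ argument. A minimal unweighted sketch:

```python
def bfs_dist(adj, src, dst, limit):
    """BFS distance from src to dst, capped at limit + 1."""
    if src == dst:
        return 0
    seen, frontier, d = {src}, [src], 0
    while frontier and d <= limit:
        d += 1
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w == dst:
                    return d
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return limit + 1

def greedy_spanner(n, edges, k):
    """Greedy (2k-1)-spanner of an unweighted graph on vertices 0..n-1:
    keep edge (u, v) only if u and v are at distance > 2k - 1 in the
    spanner built so far."""
    adj = {v: [] for v in range(n)}
    spanner = []
    for u, v in edges:
        if bfs_dist(adj, u, v, limit=2 * k - 1) > 2 * k - 1:
            adj[u].append(v)
            adj[v].append(u)
            spanner.append((u, v))
    return spanner

# A 4-cycle with k = 2 (stretch 3): the last edge is redundant.
print(greedy_spanner(4, [(0, 1), (1, 2), (2, 3), (3, 0)], k=2))
# -> [(0, 1), (1, 2), (2, 3)]
```

The paper's contribution is orthogonal to this toy: making the round complexity and stretch of a *distributed* clustering-based construction deterministic guarantees rather than with-high-probability ones.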


We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft Q-learning, and maximum entropy policy gradient, and is closely related to optimism- and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
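The Boltzmann exploration policy mentioned above is a standard softmax over per-action values; in K-learning the temperature is the risk-seeking parameter. A minimal numerically stable sketch (the K-values here are arbitrary placeholders, not outputs of the K-learning Bellman equation):

```python
import numpy as np

def boltzmann_policy(K_values, tau):
    """Softmax (Boltzmann) distribution over actions given per-action
    K-values and temperature tau > 0; smaller tau -> greedier policy."""
    z = K_values / tau
    z = z - z.max()          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

K = np.array([1.0, 2.0, 2.0])
p = boltzmann_policy(K, tau=0.5)
print(p.round(3))            # ties get equal mass; higher K -> higher mass
```

Note that shifting by the maximum leaves the distribution unchanged (the shift cancels in the normalization) while preventing overflow for large `K_values / tau`.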

Deep convolutional neural networks (CNNs) have demonstrated dominant performance in person re-identification (Re-ID). Existing CNN-based methods utilize global average pooling (GAP) to aggregate intermediate convolutional features for Re-ID. However, this strategy only considers the first-order statistics of local features and treats local features at different locations as equally important, leading to sub-optimal feature representations. To deal with these issues, we propose a novel \emph{weighted bilinear coding} (WBC) model for local feature aggregation in CNN networks to pursue more representative and discriminative feature representations. Specifically, bilinear coding is used to encode the channel-wise feature correlations to capture richer feature interactions. Meanwhile, a weighting scheme is applied to the bilinear coding to adaptively adjust the weights of local features at different locations based on their importance in recognition, further improving the discriminability of feature aggregation. To handle the spatial misalignment issue, we use a salient part net to derive salient body parts, and apply the WBC model to each part. The final representation, formed by concatenating the WBC-encoded features of each part, is both discriminative and resistant to spatial misalignment. Experiments on three benchmarks, including Market-1501, DukeMTMC-reID and CUHK03, demonstrate the favorable performance of our method against other state-of-the-art methods.
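A minimal NumPy sketch of the weighted bilinear idea, under the assumption that the descriptor is a weighted sum of per-location outer products $\sum_i w_i\, x_i x_i^\top$ followed by the usual signed-square-root and L2 normalization for bilinear features; in the paper the weights are learned, here they are simply given, and all names are illustrative:

```python
import numpy as np

def weighted_bilinear_coding(X, w):
    """X: (H*W, C) local feature map; w: (H*W,) location weights.
    Returns an L2-normalized C*C bilinear descriptor capturing
    second-order (channel-wise) feature correlations."""
    C = X.shape[1]
    B = np.zeros((C, C))
    for x, wi in zip(X, w):
        B += wi * np.outer(x, x)     # weighted second-order statistics
    v = B.flatten()
    v = np.sign(v) * np.sqrt(np.abs(v))   # signed square root
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
X = rng.standard_normal((49, 8))     # e.g. a 7x7 grid with 8 channels
w = np.full(49, 1 / 49)              # uniform weights: plain bilinear pooling
desc = weighted_bilinear_coding(X, w)
print(desc.shape)                    # (64,)
```

With uniform weights this reduces to ordinary bilinear pooling; non-uniform weights are what lets the model down-weight background locations, which is the contrast with GAP drawn in the abstract.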
