
The essential interactive capacity of a discrete memoryless channel is defined in this paper as the maximal rate at which the transcript of any interactive protocol can be reliably simulated over the channel using a deterministic coding scheme. In contrast to other interactive capacity definitions in the literature, this definition makes no assumptions on the order of speakers (which can be adaptive) and does not allow any use of private or public randomness; hence, the essential interactive capacity is a function of the channel model only. It is shown that the essential interactive capacity of any binary memoryless symmetric (BMS) channel is at least $0.0302$ times its Shannon capacity. To that end, we present a simple coding scheme, based on extended-Hamming codes combined with error detection, that achieves the lower bound in the special case of the binary symmetric channel (BSC). We then adapt the scheme to the entire family of BMS channels and show that it achieves the same lower bound using extreme values of the Bhattacharyya parameter.
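
To make the building block concrete, the following is a minimal Python sketch, not taken from the paper, of an extended Hamming $(8,4)$ code with error detection: it corrects any single bit error and detects (but does not correct) double errors via the overall parity bit. The specific $(8,4)$ parameters, function names, and self-check are illustrative assumptions only.

```python
import itertools

def encode(data):
    """Extended Hamming (8,4): 4 data bits -> 7 Hamming bits + overall parity bit."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    code7 = [p1, p2, d1, p3, d2, d3, d4]      # parity bits sit at positions 1, 2, 4
    return code7 + [sum(code7) % 2]            # append overall (extended) parity

def decode(word):
    """Decode 8 possibly corrupted bits.
    Returns (data, status), status in {'ok', 'corrected', 'detected'}."""
    c, overall = list(word[:7]), word[7]
    # The syndrome gives the 1-indexed position of a single error (0 if none).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    parity_ok = (sum(c) + overall) % 2 == 0
    if syndrome == 0 and parity_ok:
        status = 'ok'
    elif syndrome != 0 and not parity_ok:
        c[syndrome - 1] ^= 1                   # single error among the 7 Hamming bits
        status = 'corrected'
    elif syndrome == 0 and not parity_ok:
        status = 'corrected'                   # the overall parity bit itself was flipped
    else:
        status = 'detected'                    # two errors: detected, not corrected
    return [c[2], c[4], c[5], c[6]], status

# Self-check: every single bit error is corrected for every message.
for data in itertools.product([0, 1], repeat=4):
    cw = encode(list(data))
    for i in range(8):
        corrupted = cw.copy()
        corrupted[i] ^= 1
        decoded, status = decode(corrupted)
        assert decoded == list(data) and status == 'corrected'
```

The decoder's third outcome, 'detected', is the error-detection capability that the abstract refers to combining with the extended Hamming code.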

Related Content


In this paper, we consider several efficient data structures for the problem of sampling from a dynamically changing discrete probability distribution, where some prior information is known on the distribution of the rates, in particular the maximum and minimum rate, and where the number of possible outcomes $N$ is large. We consider three basic data structures: the Acceptance-Rejection method, the Complete Binary Tree, and the Alias method. These can be used as building blocks in a multi-level data structure, where at each level one of the basic data structures can be used, with the top level selecting a group of events and the bottom level selecting an element from a group. Depending on the assumptions on the distribution of the rates of the outcomes, different combinations of the basic structures can be used. We prove that for particular data structures the expected time of sampling and update is constant when the rate distribution follows certain conditions. We show that, for any distribution, combining a tree structure with the Acceptance-Rejection method achieves an expected sampling and update time of $O\left(\log\log\left(r_{max}/r_{min}\right)\right)$, where $r_{max}$ is the maximum rate and $r_{min}$ the minimum rate. We also discuss an implementation of a two-level Acceptance-Rejection data structure that allows expected constant time for sampling and amortized constant time for updates, assuming that $r_{max}$ and $r_{min}$ are known and the number of events is sufficiently large. Finally, we present an experimental verification, highlighting the limits imposed by the constraints of a real-life setting.
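
As a concrete illustration of the simplest of these building blocks, here is a minimal Python sketch, not taken from the paper, of the Acceptance-Rejection method for a dynamically changing discrete distribution: a candidate outcome is drawn uniformly and accepted with probability proportional to its rate, using a known upper bound on the maximum rate, so that updates are constant time and the expected number of rejection rounds per sample is the bound divided by the average rate. The class and method names are illustrative assumptions.

```python
import random

class AcceptanceRejectionSampler:
    """Samples index i with probability rates[i] / sum(rates), given an upper
    bound r_max_bound on every rate. Updates are O(1); the expected number of
    proposal rounds per sample is r_max_bound / (average rate)."""

    def __init__(self, rates, r_max_bound):
        self.rates = list(rates)
        self.r_max_bound = r_max_bound

    def update(self, i, new_rate):
        # O(1) update: overwrite the rate (it must stay within the known bound).
        assert 0 <= new_rate <= self.r_max_bound
        self.rates[i] = new_rate

    def sample(self):
        n = len(self.rates)
        while True:
            i = random.randrange(n)                        # propose an index uniformly
            if random.random() * self.r_max_bound < self.rates[i]:
                return i                                    # accept with prob. rate / bound

# Usage: all rates are bounded by 4, so a sample needs few proposals in expectation.
sampler = AcceptanceRejectionSampler([1.0, 2.0, 4.0, 0.5], r_max_bound=4.0)
sampler.update(3, 3.0)                                      # dynamic change of a rate
counts = [0] * 4
for _ in range(10_000):
    counts[sampler.sample()] += 1
print(counts)        # roughly proportional to [1.0, 2.0, 4.0, 3.0]
```

In the multi-level structures described above, such a component is combined with a tree or Alias structure: the top level selects a group of events and the bottom level selects an element from the group, and grouping outcomes of similar rate keeps the rejection probability inside each group small.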

Heuristic search-based motion planning algorithms typically discretise the search space in order to solve the shortest path problem. Their performance is closely related to this discretisation: a fine discretisation allows for better approximations of the continuous search space, but makes the search for a solution more computationally costly, while a coarser resolution might allow the algorithms to find solutions quickly at the expense of quality. For large state spaces, it can be beneficial to search for solutions across multiple resolutions, even though defining the discretisations is challenging. The recently proposed algorithm Multi-Resolution A* (MRA*) searches over multiple resolutions: it traverses large areas of obstacle-free space and escapes local minima at a coarse resolution, and it can also navigate so-called narrow passageways at a finer resolution. In this work, we develop AMRA*, an anytime version of MRA*. AMRA* tries to find a solution quickly using the coarse resolution as much as possible. It then refines the solution by relying on the fine resolution to discover better paths that may not have been available at the coarse resolution. In addition to being anytime, AMRA* can also leverage information sharing between multiple heuristics. We prove that AMRA* is complete and optimal (in the limit of time) with respect to the finest resolution. We show its performance on 2D grid navigation and 4D kinodynamic planning problems.
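
The anytime behaviour can be illustrated with a highly simplified sketch; this is not the actual AMRA* algorithm (which shares search effort across resolutions through multiple queues and heuristics), only the coarse-to-fine idea: run grid searches from the coarsest to the finest resolution under a time budget and always keep the best path found so far. Start and goal are assumed to lie on every resolution's lattice; all names are illustrative.

```python
import heapq, itertools, math, time

def astar(grid_size, start, goal, blocked, step):
    """Plain A* on a 4-connected grid with cell size `step`. Returns a path or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])    # Manhattan heuristic
    tie = itertools.count()
    open_heap = [(h(start), next(tie), 0, start, None)]
    parents, g_cost = {}, {start: 0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in parents:
            continue                                            # already expanded
        parents[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size):
                continue
            if blocked(nxt) or nxt in parents:
                continue
            ng = g + step
            if ng < g_cost.get(nxt, math.inf):
                g_cost[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), next(tie), ng, nxt, node))
    return None

def anytime_coarse_to_fine(grid_size, start, goal, blocked, resolutions, budget_s):
    """Return (path, cost) of the best solution found before the time budget expires."""
    best, t0 = None, time.monotonic()
    for step in sorted(resolutions, reverse=True):              # coarsest resolution first
        if time.monotonic() - t0 > budget_s:
            break
        path = astar(grid_size, start, goal, blocked, step)
        if path is not None:
            cost = (len(path) - 1) * step
            if best is None or cost < best[1]:
                best = (path, cost)                              # keep the cheaper path
    return best
```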

We analyze the high-SNR regime of the $M\times K$ Network MISO channel in which each transmitter has access to a different channel estimate, possibly with different precision. It has recently been shown that, in some regimes, this setting attains the same Degrees-of-Freedom (DoF) as the ideal centralized setting with perfect channel state information (CSI) sharing, in which all the transmitters are endowed with the best estimate available at any transmitter. This result is restricted by the limitations of the DoF metric, as it only provides information about the slope of growth of the capacity as a function of the SNR, without any insight into the possible performance at a given SNR. To overcome this limitation, we analyze the affine approximation of the rate in the high-SNR regime for this decentralized Network MISO setting, for the antenna configurations in which it achieves the DoF of the centralized setting. We show that, for a regime of antenna configurations, it is possible to asymptotically attain the same achievable rate as in the ideal centralized scenario. Consequently, it is possible to achieve the beamforming gain of the ideal perfect-CSI-sharing setting even if only a subset of transmitters is endowed with precise CSI, which can be exploited in scenarios such as distributed massive MIMO where the number of transmit antennas is much larger than the number of served users. This outcome is a consequence of the synergistic compromise between CSI precision at the transmitters and consistency between the locally computed precoders, an inherent trade-off of decentralized settings that does not exist in a centralized CSI configuration. We propose a precoding scheme achieving this result, built on an uneven structure in which some transmitters reduce the precision of their own precoding for the sake of using transmission parameters that can be more easily predicted by the other transmitters.
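
For reference, the affine approximation referred to above is the standard high-SNR expansion of the rate; in generic notation (an assumption here, not an excerpt from the paper),
$$ R(\mathrm{SNR}) = S_\infty \,\log_2(\mathrm{SNR}) - \mathcal{L}_\infty + o(1), $$
where the slope $S_\infty$ is the DoF and $\mathcal{L}_\infty$ is the rate (power) offset. Two schemes with the same DoF can therefore still differ by a constant rate gap at high SNR, and it is precisely this gap between the decentralized and the ideal centralized setting that is shown to vanish for a regime of antenna configurations.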

Modern machine learning systems are traditionally designed and tested with the overall goal of achieving the best possible performance on average. In this work, we consider an approach to building learning systems which treats the question of "how should we quantify good off-sample performance?" as a key design decision. We describe this proposal using a simple and general formulation, place the current dominant paradigm within the proper historical context, and then survey the literature for more recent developments that depart from tradition and can be viewed as special cases of our proposed methodology.

In this paper, we investigate a cell-free massive MIMO system in which both the access points (APs) and the user equipments (UEs) are equipped with multiple antennas, over jointly correlated Rayleigh fading channels. We study four uplink implementations, from fully centralized processing to fully distributed processing, and derive achievable spectral efficiency (SE) expressions with minimum mean-squared error successive interference cancellation (MMSE-SIC) detectors and arbitrary combining schemes. Furthermore, the global and local MMSE combining schemes are derived based on full and local channel state information (CSI) obtained under pilot contamination, which maximize the achievable SE for the fully centralized and fully distributed implementations, respectively. We also study a two-layer decoding implementation with an arbitrary combining scheme in the first layer and optimal large-scale fading decoding in the second layer, and we compute novel closed-form SE expressions for this two-layer implementation with maximum ratio combining. We compare the SE of the different implementation levels and combining schemes and investigate the effect of having additional UE antennas. Notably, increasing the number of antennas per UE may degrade the SE, and the optimal number of UE antennas that maximizes the SE depends on the implementation level, the length of the resource block, and the number of UEs.
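
For orientation, a centralized MMSE combining matrix for UE $k$ typically takes the following generic form (standard textbook notation, given here as an assumption rather than as an excerpt from the paper):
$$ \mathbf{V}_k = \Bigg( \sum_{i=1}^{K} \left( \widehat{\mathbf{H}}_i \mathbf{P}_i \widehat{\mathbf{H}}_i^{\mathrm{H}} + \mathbf{C}_i \right) + \sigma^2 \mathbf{I} \Bigg)^{-1} \widehat{\mathbf{H}}_k \mathbf{P}_k, $$
where $\widehat{\mathbf{H}}_i$ is the (collective) channel estimate of UE $i$, $\mathbf{P}_i$ its transmit signal covariance, $\mathbf{C}_i$ the covariance of the corresponding estimation error, and $\sigma^2$ the noise power. A local MMSE combiner has the same structure but is computed at each AP from its local CSI only, which is the distinction between the fully centralized and fully distributed implementations studied above.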

Deterministic identification (DI) is addressed for Gaussian channels with fast and slow fading, where channel side information is available at the decoder. In particular, it is established that the number of messages scales as $2^{n\log(n)R}$, where $n$ is the block length and $R$ is the coding rate. Lower and upper bounds on the DI capacity are developed in this scale for fast and slow fading. Consequently, the DI capacity is infinite in the exponential scale and zero in the double-exponential scale, regardless of the channel noise.
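
In other words, the coding rate above is measured in the super-exponential scale
$$ R = \frac{\log M(n)}{n \log n}, \qquad\text{equivalently}\qquad M(n) = 2^{\,n \log(n)\, R}, $$
where $M(n)$ is the number of messages. A rate that is positive and finite in this scale corresponds to an infinite rate in the usual exponential scale $2^{nR}$ and to a zero rate in the double-exponential scale $2^{2^{nR}}$ of randomized identification, which is the sense in which the DI capacity is infinite in the former and zero in the latter.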

We argue that proven exponential upper bounds on runtimes, an established area in classic algorithms, are also of interest in heuristic search, and we prove several such results. We show that each of the algorithms randomized local search, the Metropolis algorithm, simulated annealing, and the (1+1) evolutionary algorithm can optimize any pseudo-Boolean weakly monotonic function under a large set of noise assumptions in a runtime that is at most exponential in the problem dimension $n$. This drastically extends a previous such result, which was limited to the (1+1) EA, the LeadingOnes function, and one-bit or bit-wise prior noise with noise probability at most $1/2$, and at the same time simplifies its proof. With the same general argument we also derive, among other results, a sub-exponential upper bound for the runtime of the $(1,\lambda)$ evolutionary algorithm on the OneMax problem when the offspring population size $\lambda$ is logarithmic, but below the efficiency threshold. To show that our approach can also deal with non-trivial parent population sizes, we prove an exponential upper bound for the runtime of the mutation-based version of the simple genetic algorithm on the OneMax benchmark, matching a known exponential lower bound.
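
For readers unfamiliar with the simplest of these algorithms, the following is a minimal Python sketch of the (1+1) evolutionary algorithm maximizing a pseudo-Boolean function under bit-wise prior noise. The noise model, parameter values, and names are illustrative assumptions and do not reproduce the paper's exact setting.

```python
import random

def one_plus_one_ea(f, n, noise_p=0.05, max_evals=100_000):
    """(1+1) EA with standard bit-wise mutation (rate 1/n), maximizing f.
    Prior bit-wise noise: every evaluation sees a copy of the search point in
    which each bit has been flipped independently with probability noise_p."""
    def noisy_eval(x):
        return f([b ^ (random.random() < noise_p) for b in x])

    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_evals):
        # Mutation: flip each bit of the parent independently with probability 1/n.
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        # Selection: keep the child if its noisy fitness is at least the parent's.
        if noisy_eval(child) >= noisy_eval(parent):
            parent = child
        if f(parent) == n:          # noise-free optimum reached (valid for OneMax)
            break
    return parent

# OneMax (the number of one-bits) is a weakly monotonic pseudo-Boolean function.
best = one_plus_one_ea(sum, n=50)
print(sum(best), "of 50 bits set")
```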

There have been significant research activities in recent years to automate the design of channel encoders and decoders via deep learning. Due to the dimensionality challenge in channel coding, it is prohibitively complex to design and train relatively large neural channel codes via deep learning techniques. Consequently, most of the results in the literature are limited to relatively short codes having less than 100 information bits. In this paper, we construct ProductAEs, a computationally efficient family of deep-learning-driven (encoder, decoder) pairs, that aims at enabling the training of relatively large channel codes (both encoders and decoders) with a manageable training complexity. We build upon the ideas from classical product codes and propose constructing large neural codes using smaller code components. More specifically, instead of directly training the encoder and decoder for a large neural code of dimension $k$ and blocklength $n$, we provide a framework that requires training neural encoders and decoders for the code parameters $(k_1,n_1)$ and $(k_2,n_2)$ such that $k_1 k_2=k$ and $n_1 n_2=n$. Our training results show significant gains, over all ranges of signal-to-noise ratio (SNR), for a code of parameters $(100,225)$ and a moderate-length code of parameters $(196,441)$, over polar codes under successive cancellation (SC) decoding. Moreover, our results demonstrate meaningful gains over the Turbo Autoencoder (TurboAE) and state-of-the-art classical codes. This is the first work to design product autoencoders and a pioneering work on training large channel codes.
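
The classical product-code structure that ProductAEs build upon can be sketched in a few lines of NumPy: a length-$k = k_1 k_2$ message is arranged as a $k_1 \times k_2$ array, every row is encoded with a $(k_2, n_2)$ component encoder, and every column of the result is encoded with a $(k_1, n_1)$ component encoder, yielding an $n_1 \times n_2$ codeword array. In a ProductAE the component encoders are small neural networks; here a trivial single-parity-check encoder stands in for them, and all names are illustrative assumptions.

```python
import numpy as np

def product_encode(message_bits, row_encoder, col_encoder, k1, k2):
    """Classical product-code encoding: rows first, then columns."""
    msg = np.asarray(message_bits).reshape(k1, k2)
    rows = np.stack([row_encoder(msg[i]) for i in range(k1)])               # k1 x n2
    return np.stack([col_encoder(rows[:, j]) for j in range(rows.shape[1])],
                    axis=1)                                                 # n1 x n2

def single_parity(v):
    """Placeholder component encoder: append one parity bit to a bit vector."""
    v = np.asarray(v) % 2
    return np.concatenate([v, [v.sum() % 2]])

codeword = product_encode(np.random.randint(0, 2, size=10 * 14),
                          row_encoder=single_parity, col_encoder=single_parity,
                          k1=10, k2=14)
print(codeword.shape)   # (11, 15), i.e. n1 = k1 + 1 and n2 = k2 + 1
```

Training two small neural components of parameters $(k_1,n_1)$ and $(k_2,n_2)$ in place of `single_parity` is what keeps the training complexity manageable compared to training a monolithic code of dimension $k_1 k_2$.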

The identification capacity is developed without randomization at either the encoder or the decoder. In particular, a full characterization is established for the deterministic identification (DI) capacity for the Gaussian channel and for the general discrete memoryless channel (DMC), with and without constraints. Originally, Ahlswede and Dueck established the identification capacity with local randomness given at the encoder, resulting in a double-exponential number of messages in the block length. In the deterministic setup, the number of messages scales only exponentially, as in Shannon's transmission paradigm, but the achievable identification rates can be significantly higher than the transmission rates. Ahlswede and Dueck further stated a capacity result for the deterministic setting of a DMC, but did not provide an explicit proof. In this paper, a detailed proof is given for both the Gaussian channel and the general DMC. The DI capacity of the Gaussian channel is infinite, regardless of the noise.
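
To make the contrast explicit: with $M(n)$ denoting the number of identifiable messages at block length $n$,
$$ \text{randomized identification (Ahlswede--Dueck):}\;\; M(n) \doteq 2^{2^{nR}}, \qquad \text{deterministic identification:}\;\; M(n) \doteq 2^{nR}, $$
so in the deterministic setup the number of messages grows only exponentially in the block length, as in Shannon's transmission paradigm, yet the achievable DI rates can still be significantly higher than the transmission rates of the same channel.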

We consider the exploration-exploitation trade-off in reinforcement learning and show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the 'temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space; it is similar to other well-known methods in the literature, including Q-learning, soft Q-learning, and maximum-entropy policy gradient, and is closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
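
Based only on the properties stated above (per state-action K-values that induce a Boltzmann policy whose temperature equals the risk-seeking parameter), the exploration rule can be sketched as follows; the tabular setting and names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def boltzmann_policy(k_values, tau):
    """Boltzmann (softmax) exploration policy over the actions of one state.
    k_values: K-values for each action (optimistic estimates of the Q-values).
    tau:      temperature, equal to the risk-seeking parameter.
    Returns the probability of selecting each action."""
    z = np.asarray(k_values, dtype=float) / tau
    z -= z.max()                     # numerical stabilization of the softmax
    p = np.exp(z)
    return p / p.sum()

# A small temperature concentrates on the largest K-value; a large temperature
# (a more risk-seeking, more exploratory agent) spreads probability out.
k = [1.0, 1.5, 0.2]
print(boltzmann_policy(k, tau=0.1))   # nearly deterministic on the second action
print(boltzmann_policy(k, tau=2.0))   # much closer to uniform
```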
