
An error correcting code ($\mathsf{ECC}$) allows a sender to send a message to a receiver such that even if a constant fraction of the communicated bits are corrupted, the receiver can still learn the message correctly. Due to their importance and fundamental nature, $\mathsf{ECC}$s have been extensively studied, one of the main goals being to maximize the fraction of errors that the $\mathsf{ECC}$ is resilient to. For adversarial erasure errors (over a binary channel) the maximal error resilience of an $\mathsf{ECC}$ is $\frac12$ of the communicated bits. In this work, we break this $\frac12$ barrier by introducing the notion of an interactive error correcting code ($\mathsf{iECC}$) and constructing an $\mathsf{iECC}$ that is resilient to adversarial erasure of $\frac35$ of the total communicated bits. We emphasize that the adversary can corrupt both the sending party and the receiving party, and that both parties' rounds contribute to the adversary's budget. We also prove an impossibility (upper) bound of $\frac23$ on the maximal resilience of any binary $\mathsf{iECC}$ to adversarial erasures. In the bit flip setting, we prove an impossibility bound of $\frac27$.
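
To see why $\frac12$ is the ceiling for non-interactive codes, here is the standard Plotkin-type averaging argument, sketched for completeness (folklore, not part of the abstract above). A binary $\mathsf{ECC}$ of block length $n$ with codewords $c_1,\ldots,c_M$ survives $e$ adversarial erasures exactly when every pair of codewords differs in more than $e$ positions. Counting disagreements coordinate by coordinate, with $a_i$ codewords sending a $1$ in position $i$,
$$\sum_{j<k}\Delta(c_j,c_k)\;=\;\sum_{i=1}^{n} a_i\,(M-a_i)\;\le\;\frac{nM^2}{4},$$
so the minimum distance is at most the average, $\frac{nM^2/4}{\binom{M}{2}}=\frac{nM}{2(M-1)}=\frac{n}{2}\bigl(1+o(1)\bigr)$ as $M\to\infty$. Hence any one-way binary code carrying a non-trivial amount of information tolerates erasure of at most a $\frac12+o(1)$ fraction of its bits, which is exactly the barrier the interactive construction above breaks by letting the receiver's messages (also subject to the adversary's budget) carry information back to the sender.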

Related Content

In this paper, a new problem of transmitting information over the adversarial insertion-deletion channel with feedback is introduced. Suppose that the encoder transmits $n$ binary symbols one-by-one over a channel in which some symbols can be deleted and some additional symbols can be inserted. After each transmission, the encoder is notified about the insertions or deletions that have occurred within the previous transmission, and the encoding strategy can be adapted accordingly. The goal is to design an encoder that is able to transmit error-free as much information as possible under the assumption that the total number of deletions and insertions is limited by $\tau n$, $0<\tau<1$. We show how this problem can be reduced to the problem of transmitting messages over the substitution channel. Thereby, the maximal asymptotic rate of feedback insertion-deletion codes is completely established. The maximal asymptotic rate for the adversarial substitution channel was partially determined by Berlekamp and later completed by Zigangirov. However, Zigangirov's analysis of the lower bound is quite involved. We revisit Zigangirov's result and present a more detailed version of his proof.

In this paper, we discuss two-stage encoding algorithms capable of correcting a fraction of asymmetric errors. Suppose that the encoder transmits $n$ binary symbols $(x_1,\ldots,x_n)$ one-by-one over the Z-channel, in which a 1 is received only if a 1 is transmitted. At some designated moment, say $n_1$, the encoder uses noiseless feedback and adjusts further encoding strategy based on the partial output of the channel $(y_1,\ldots,y_{n_1})$. The goal is to transmit error-free as much information as possible under the assumption that the total number of errors inflicted by the Z-channel is limited by $\tau n$, $0<\tau<1$. We propose an encoding strategy that uses a list-decodable code at the first stage and a high-error low-rate code at the second stage. This strategy and our converse result yield that there is a sharp transition at $\tau=\max\limits_{0<w<1}\frac{w + w^3}{1+4w^3}\approx 0.44$ from positive rate to zero rate for two-stage encoding strategies. As side results, we derive bounds on the size of list-decodable codes for the Z-channel and prove that for a fraction $1/4+\epsilon$ of asymmetric errors, an error-correcting code contains at most $O(\epsilon^{-3/2})$ codewords.
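
As a quick numerical check of the threshold quoted above, the following short Python sketch (an illustration of the stated formula, not code from the paper) maximizes $\frac{w+w^3}{1+4w^3}$ over $0<w<1$ by grid search:

    # Locate tau* = max_{0 < w < 1} (w + w^3) / (1 + 4 w^3) by grid search.
    def z_channel_threshold(num_points=100_000):
        best_w, best_tau = 0.0, 0.0
        for k in range(1, num_points):
            w = k / num_points
            tau = (w + w**3) / (1 + 4 * w**3)
            if tau > best_tau:
                best_w, best_tau = w, tau
        return best_w, best_tau

    w_star, tau_star = z_channel_threshold()
    print(f"w* ~ {w_star:.3f}, tau* ~ {tau_star:.4f}")  # roughly w* ~ 0.661, tau* ~ 0.4407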

In this paper, combinatorial quantitative group testing (QGT) with noisy measurements is studied. The goal of QGT is to detect defective items from a data set of size $n$ with counting measurements, each of which counts the number of defects in a selected pool of items. While most of the literature considers either probabilistic QGT with random noise or combinatorial QGT with noiseless measurements, our focus is on combinatorial QGT with noisy measurements that may be adversarially perturbed by bounded additive noise. Since perfect detection is impossible, a partial detection criterion is adopted. With the adversarial noise bounded by $d_n = \Theta(n^\delta)$ and the detection criterion being that no more than $k_n = \Theta(n^\kappa)$ errors may be made, our goal is to characterize the fundamental limit on the number of measurements, termed the \emph{pooling complexity}, as well as to provide explicit constructions of measurement plans with optimal pooling complexity and efficient decoding algorithms. We first show that the fundamental limit is $\frac{1}{1-2\delta}\frac{n}{\log n}$ to within a constant factor not depending on $(n,\kappa,\delta)$ for the non-adaptive setting when $0<2\delta\leq \kappa <1$, sharpening the previous result by Chen and Wang [2]. We also provide an explicit construction of a non-adaptive deterministic measurement plan with pooling complexity $\frac{1}{1-2\delta}\frac{n}{\log_{2} n}$ up to a constant factor, matching the fundamental limit, with decoding complexity $o(n^{1+\rho})$ for all $\rho > 0$, i.e., nearly linear in $n$, the size of the data set.
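
To illustrate how the bound scales, the short Python sketch below (ours, purely illustrative) evaluates the order term $\frac{1}{1-2\delta}\frac{n}{\log_2 n}$ for a few noise exponents $\delta$; constant factors are dropped, exactly as in the statement above.

    import math

    def pooling_complexity_order(n, delta):
        """Order of the optimal number of measurements, up to constants,
        for adversarial noise bounded by d_n = Theta(n**delta)."""
        assert 0 < 2 * delta < 1, "the stated bound assumes 0 < 2*delta < 1"
        return n / ((1 - 2 * delta) * math.log2(n))

    n = 10**6
    for delta in (0.1, 0.25, 0.4):
        print(delta, round(pooling_complexity_order(n, delta)))
    # The required number of measurements grows as delta -> 1/2,
    # i.e. as the noise magnitude approaches sqrt(n).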

When an adversary gains access to the data sample in adversarial robustness models and can make data-dependent changes, how should the decision maker, who must rely on the adversarially modified data, carry out statistical inference? How can the resilience and elasticity of the network be justified from a game-theoretic viewpoint, provided a tool exists to measure that elasticity? This paper considers the principle of Byzantine resilience distributed hypothesis testing (BRDHT) for cognitive radio networks (CRNs); without loss of generality, the approach extends to any type of homogeneous or heterogeneous network. We use the temporal rate of the $\alpha$-leakage as the tool through which we measure the aforementioned resilience. We approach the main problem from an information-theoretic point of view by exploring the \textit{adversarial robustness} of distributed hypothesis testing rules. We chiefly examine whether one can write $\mathbb{F}=ma$ for the main problem and, consequently, define a nested bi-level (in fact 3-level, including a hidden control law) mean-field-game (MFG) realisation that also solves the control dynamics. Further discussions, e.g. on synchronisation, are also provided. Our novel online algorithm, named $\mathbb{OBRDHT}$, and its solution are both unique and generic, and we finally evaluate them by simulations.

The aim of this thesis is to develop a theoretical framework to study parameter estimation of quantum channels. We study the task of estimating unknown parameters encoded in a channel in the sequential setting. A sequential strategy is the most general way to use a channel multiple times. Our goal is to establish lower bounds (called Cramér-Rao bounds) on the estimation error. The bounds we develop are universally applicable; i.e., they apply to all permissible quantum dynamics. We consider the use of catalysts to enhance the power of a channel estimation strategy. This is termed amortization. The power of a channel for parameter estimation is determined by its Fisher information. Thus, we study how much a catalyst quantum state can enhance the Fisher information of a channel by defining the amortized Fisher information. We establish our bounds by proving that for certain Fisher information quantities, catalyst states do not improve the performance of a sequential estimation protocol compared to a parallel one. The technical term for this is an amortization collapse. We use this to establish bounds when estimating one parameter, or multiple parameters simultaneously. Our bounds apply universally, and we also cast them as optimization problems. For the single-parameter case, we establish bounds for general quantum channels using both the symmetric logarithmic derivative (SLD) Fisher information and the right logarithmic derivative (RLD) Fisher information. The task of estimating multiple parameters simultaneously is more involved than the single-parameter case, because the Cramér-Rao bounds take the form of matrix inequalities. We establish a scalar Cramér-Rao bound for multiparameter channel estimation using the RLD Fisher information. For both single and multiparameter estimation, we provide a no-go condition for the so-called Heisenberg scaling using our RLD-based bound.
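
For orientation, the single-parameter quantum Cramér-Rao bound underlying the discussion takes the standard textbook form (stated here for reference, not quoted from the thesis):
$$\mathrm{Var}\bigl(\hat{\theta}\bigr)\;\ge\;\frac{1}{n\,F_Q(\theta)},\qquad F_Q(\theta)=\mathrm{Tr}\bigl[\rho_\theta L_\theta^{2}\bigr],\qquad \partial_\theta \rho_\theta=\tfrac12\bigl(L_\theta\rho_\theta+\rho_\theta L_\theta\bigr),$$
where $\hat{\theta}$ is an unbiased estimator, $n$ is the number of independent uses, $L_\theta$ is the symmetric logarithmic derivative, and $F_Q$ is the SLD Fisher information; the amortized Fisher information studied in the thesis asks how much a catalyst input state can increase the corresponding channel quantity per use.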

For a set of points $P \subseteq \mathbb{R}^2$ and a family of regions $\mathcal{F}$, a \emph{local $t$-spanner} of $P$ is a sparse graph $G$ over $P$ such that, for any region $\mathsf{r} \in \mathcal{F}$, the subgraph induced by the points in $\mathsf{r}$, denoted by $G \cap \mathsf{r} = G_{P \cap \mathsf{r}}$, is a $t$-spanner for all the points of $\mathsf{r} \cap P$. We present algorithms for the construction of local spanners with respect to several families of regions, such as homothets of a convex region. Unfortunately, the number of edges in the resulting graph depends logarithmically on the spread of the input point set. We prove that this dependency cannot be removed, thus settling an open problem raised by Abam and Borouny. We also show improved constructions (with no dependency on the spread) of local spanners for fat triangles and regular $k$-gons. In particular, this improves over the known construction for axis-parallel squares. We also study a somewhat weaker notion of local spanner in which one is allowed to shrink the region a bit. Any spanner is a weak local spanner if the shrinking is proportional to the diameter. Surprisingly, we show a near-linear-size construction of a weak spanner for axis-parallel rectangles, where the shrinkage is \emph{multiplicative}.

Generative adversarial networks (GANs) with clustered latent spaces can perform conditional generation in a completely unsupervised manner. In the real world, the salient attributes of unlabeled data can be imbalanced. However, most existing unsupervised conditional GANs cannot properly cluster the attributes of such data in their latent spaces because they assume uniform distributions of the attributes. To address this problem, we theoretically derive Stein latent optimization, which provides reparameterizable gradient estimations of the latent distribution parameters under a Gaussian mixture prior in a continuous latent space. Structurally, we introduce an encoder network and a novel unsupervised conditional contrastive loss to ensure that data generated from a single mixture component represent a single attribute. We confirm that the proposed method, named Stein Latent Optimization for GANs (SLOGAN), successfully learns balanced or imbalanced attributes and achieves state-of-the-art unsupervised conditional generation performance even in the absence of attribute information (e.g., the imbalance ratio). Moreover, we demonstrate that the attributes to be learned can be manipulated using a small amount of probe data.
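
Since the key ingredient above is a reparameterizable Gaussian mixture prior in a continuous latent space, here is a minimal NumPy sketch of drawing latent codes from such a prior via the usual location-scale reparameterization. It is a generic illustration only (the function name `sample_gm_latent` and the toy parameters are ours) and does not reproduce SLOGAN's Stein-based gradient estimator for the mixture parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_gm_latent(pi, mu, sigma, batch_size):
        """Draw latent codes z = mu_k + sigma_k * eps from a Gaussian mixture
        prior with weights pi (K,), means mu (K, D), and scales sigma (K, D)."""
        K, D = mu.shape
        comps = rng.choice(K, size=batch_size, p=pi)   # mixture-component assignment
        eps = rng.standard_normal((batch_size, D))     # reparameterization noise
        z = mu[comps] + sigma[comps] * eps             # location-scale transform
        return z, comps

    # Imbalanced two-component prior in a 2-D latent space (illustrative values).
    pi = np.array([0.8, 0.2])
    mu = np.array([[ 2.0, 0.0],
                   [-2.0, 0.0]])
    sigma = np.ones((2, 2))
    z, comps = sample_gm_latent(pi, mu, sigma, batch_size=512)
    print(z.shape, np.bincount(comps) / len(comps))    # (512, 2), roughly [0.8, 0.2]

Note that the discrete component choice above is not differentiable with respect to the mixture parameters; that is the kind of gap the paper's Stein-based estimator is designed to address.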

Neural networks are known to be vulnerable to adversarial attacks: slight but carefully constructed perturbations of the inputs which can drastically impair the network's performance. Many defense methods have been proposed to improve the robustness of deep networks by training them on adversarially perturbed inputs. However, these models often remain vulnerable to new types of attacks not seen during training, and even to slightly stronger versions of previously seen attacks. In this work, we propose a novel approach to adversarial robustness, which builds upon insights from the domain adaptation field. Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant to adversarial perturbations of the inputs. This is achieved through a game in which we learn features that are both predictive and robust (insensitive to adversarial attacks), i.e., features that cannot be used to discriminate between natural and adversarial data. Empirical results on several benchmarks demonstrate the effectiveness of the proposed approach against a wide range of attack types and attack strengths. Our code is available at //github.com/BashivanLab/afd.
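
The following compact PyTorch sketch illustrates the general domain-adversarial recipe the abstract describes: features are trained to remain predictive while fooling a discriminator that tries to tell natural from adversarial features. It is our simplification under stated assumptions (toy modules, a one-step FGSM attack, illustrative hyperparameters), not the authors' released code; see the linked repository for the actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy modules; in practice the feature extractor is a deep network.
    feat = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
    cls  = nn.Linear(256, 10)   # task classifier on features
    disc = nn.Linear(256, 1)    # natural-vs-adversarial feature discriminator
    opt_fc = torch.optim.Adam(list(feat.parameters()) + list(cls.parameters()), lr=1e-3)
    opt_d  = torch.optim.Adam(disc.parameters(), lr=1e-3)

    def fgsm(x, y, eps=0.1):
        """One-step attack used only to generate adversarial examples (inputs in [0, 1])."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(cls(feat(x_adv)), y)
        grad, = torch.autograd.grad(loss, x_adv)
        return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

    def afd_step(x, y, lam=1.0):
        x_adv = fgsm(x, y)

        # (1) Train the discriminator to separate natural from adversarial features.
        with torch.no_grad():
            z_nat, z_adv = feat(x), feat(x_adv)
        d_logits = torch.cat([disc(z_nat), disc(z_adv)])
        d_labels = torch.cat([torch.ones(len(x), 1), torch.zeros(len(x), 1)])
        d_loss = F.binary_cross_entropy_with_logits(d_logits, d_labels)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # (2) Train features/classifier to stay predictive while making
        #     adversarial features indistinguishable from natural ones.
        z_nat, z_adv = feat(x), feat(x_adv)
        task = F.cross_entropy(cls(z_nat), y) + F.cross_entropy(cls(z_adv), y)
        fool = F.binary_cross_entropy_with_logits(disc(z_adv), torch.ones(len(x), 1))
        loss = task + lam * fool
        opt_fc.zero_grad(); loss.backward(); opt_fc.step()
        return loss.item()

    # Usage: call afd_step(images, labels) inside a standard training loop.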

Full Duplex (FD) communication can revolutionize wireless communications, as it avoids using independent channels for bi-directional communication. This work generalizes point-to-point FD communication in the millimeter wave (mmWave) band to a system consisting of K pairs of massive MIMO FD nodes operating simultaneously. We present a novel joint hybrid beamforming (HYBF) and combining scheme for weighted sum-rate (WSR) maximization to enable the coexistence of massive MIMO FD links cost-efficiently. The proposed algorithm relies on alternating optimization based on the minorization-maximization method. Moreover, we present a novel self-interference (SI) and massive MIMO interference channel aware power allocation scheme to include optimal power control. Simulation results show significant performance improvement compared to a traditional bidirectional fully digital half-duplex (HD) system.

Although Deep Neural Networks (DNNs) have shown incredible performance in perceptive and control tasks, several trustworthiness issues are still open. One of the most discussed topics is the existence of adversarial perturbations, which has opened an interesting research line on provable techniques capable of quantifying the robustness of a given input. In this regard, the Euclidean distance of the input from the classification boundary is a well-established robustness measure, as it equals the norm of the minimal adversarial perturbation. Unfortunately, computing such a distance is highly complex due to the non-convex nature of NNs. Although several methods have been proposed to address this issue, to the best of our knowledge, no provable results have been presented to estimate and bound the error committed. This paper addresses this issue by proposing two lightweight strategies to find the minimal adversarial perturbation. Differently from the state of the art, the proposed approach allows formulating an error estimation theory of the approximate distance with respect to the theoretical one. Finally, a substantial set of experiments is reported to evaluate the performance of the algorithms and support the theoretical findings. The obtained results show that the proposed strategies approximate the theoretical distance for samples close to the classification boundary, leading to provable robustness guarantees against any adversarial attack.
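
To ground the quantity being approximated, recall that for a linear classifier the Euclidean distance from an input to the decision boundary, i.e. the norm of the minimal adversarial perturbation, has a closed form. The short NumPy sketch below computes it; this is a textbook illustration of the target quantity, not one of the paper's two strategies.

    import numpy as np

    def boundary_distance(w, b, x):
        """Euclidean distance from x to the hyperplane w.x + b = 0,
        i.e. the norm of the minimal perturbation that flips a linear classifier."""
        return abs(w @ x + b) / np.linalg.norm(w)

    def minimal_perturbation(w, b, x):
        """The minimal perturbation itself: project x onto the hyperplane."""
        return -(w @ x + b) / (w @ w) * w

    w, b = np.array([3.0, -4.0]), 1.0
    x = np.array([2.0, 1.0])
    delta = minimal_perturbation(w, b, x)
    print(boundary_distance(w, b, x))   # 0.6
    print(np.linalg.norm(delta))        # 0.6, and w @ (x + delta) + b == 0

For deep networks the boundary is non-convex and no such closed form exists, which is precisely why provable approximations of this distance are the object of study above.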
