Since polar codes were proposed, improving their performance at finite code lengths has received significant attention. One effective solution is the series of list flip decoders proposed in recent years. To further enhance performance, we propose a parity-check-aided dynamic successive cancellation list flip (PC-DSCLF) decoder in this paper. First, we design a simplified flip metric and show by simulation that this simplification hardly affects the error-correction performance of list flip decoders. Subsequently, we optimize the existing allocation scheme for parity check (PC) bits and design the first multi-PC-aided scheme with a dynamic characteristic for list flip decoders. The dynamic characteristic refers to the ability to correct higher-order errors, which benefits error-correction performance. Meanwhile, the multi-PC-aided scheme brings more flexible distributed check bits to list flip decoders, which narrows the search range for error bits and enables more efficient early termination. Simulation results show that, without loss of error-correction performance, the PC-DSCLF decoder achieves up to a 51.1% reduction in average complexity relative to the state-of-the-art list flip decoder at practical code lengths. Lower average complexity in turn yields lower average energy consumption and lower average decoding delay.
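As a rough illustration of the kind of simplified flip metric used in flip decoding (the exact PC-DSCLF metric is not reproduced here), the sketch below ranks candidate flip positions by the magnitude of their log-likelihood ratios; the function name, the plain |LLR| sort, and the toy values are our own illustrative assumptions.

```python
import numpy as np

def candidate_flip_positions(llrs, info_positions, num_candidates):
    """Rank information positions for bit-flipping by LLR reliability.

    A small |LLR| means the decoder was unsure about that bit, so it is a
    likely location of the first erroneous decision. This is a generic
    simplified flip metric, not the exact PC-DSCLF metric.
    """
    reliabilities = np.abs(llrs[np.array(info_positions)])
    order = np.argsort(reliabilities)          # least reliable first
    return [info_positions[i] for i in order[:num_candidates]]

# Toy usage: 8 synthesized LLRs, positions 3..7 carry information bits.
llrs = np.array([4.2, -3.8, 5.0, 0.3, -2.9, 0.9, -0.1, 2.2])
print(candidate_flip_positions(llrs, [3, 4, 5, 6, 7], 3))  # [6, 3, 5]
```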
Inverse problems are in many cases solved with optimization techniques. When the underlying model is linear, first-order gradient methods are usually sufficient. With nonlinear models, due to nonconvexity, one must often resort to second-order methods that are computationally more expensive. In this work we aim to approximate a nonlinear model with a linear one and correct the resulting approximation error. We develop a sequential method that iteratively solves a linear inverse problem and updates the approximation error by evaluating it at the new solution. This treatment convexifies the problem and allows us to benefit from established convex optimization methods. We separately consider the case where the approximation is fixed over iterations and the case where it is adaptive. In the fixed case we show theoretically under which assumptions the sequence converges. In the adaptive case, focusing on the special case of approximation by first-order Taylor expansion, we show that under certain assumptions the sequence converges to a critical point of the original nonconvex functional. Furthermore, we show that for quadratic objective functions the sequence corresponds to the Gauss-Newton method. Finally, we present numerical results that outperform the conventional model correction method, and we show that a fixed approximation can provide competitive results with a considerable computational speed-up.
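A minimal sketch of the fixed-approximation variant, under our own assumptions (a Tikhonov-regularized least-squares inner problem solved via normal equations; the forward model f and the linearization A are toy stand-ins, not the paper's models):

```python
import numpy as np

def sequential_model_correction(f, A, y, x0, lam=1e-2, iters=20):
    """Iteratively solve a linear inverse problem with an additive
    correction e_k = f(x_k) - A x_k that absorbs the model error."""
    n = A.shape[1]
    x = x0.copy()
    # The regularized normal-equation matrix is fixed, so factor it once.
    M = A.T @ A + lam * np.eye(n)
    for _ in range(iters):
        e = f(x) - A @ x                        # current approximation error
        x = np.linalg.solve(M, A.T @ (y - e))   # convex linear subproblem
    return x

# Toy nonlinear model f(x) = A x + 0.1 * x**2 (elementwise square).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
f = lambda x: A @ x + 0.1 * x**2
y = f(x_true)
x_hat = sequential_model_correction(f, A, y, np.zeros(10))
print(np.linalg.norm(x_hat - x_true))  # small residual if iteration converges
```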
We consider the problem of authenticated communication over a discrete arbitrarily varying channel where the legitimate parties are unaware of whether or not an adversary is present. When there is no adversary, the channel state always takes a default value $s_0$. When the adversary is present, they may choose the channel state sequence based on a non-causal noisy view of the transmitted codewords and the encoding and decoding scheme. We require that the decoder output the correct message with high probability when there is no adversary, and either output the correct message or reject the transmission when the adversary is present. Further, we allow the transmitter to employ private randomness during encoding that is known neither to the receiver nor to the adversary. Our first result proves a dichotomy property for the capacity of this problem: the capacity either equals zero or it equals the non-adversarial capacity of the channel. Next, we give a sufficient condition for the capacity to be positive even when the non-adversarial channel to the receiver is stochastically degraded with respect to the channel to the adversary. Our proofs rely on a connection to a standalone authentication problem, where the goal is to accept or reject a candidate message that is already available to the decoder. Finally, we give examples and compare our sufficient condition with other related conditions known in the literature.
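Schematically, writing $C_{\text{auth}}$ for the authenticated capacity and $C(W_{s_0})$ for the non-adversarial capacity of the channel under the default state $s_0$ (our notation, chosen for illustration), the dichotomy can be summarized as
\[
C_{\text{auth}} \in \{0,\; C(W_{s_0})\},
\]
so that whenever authenticated communication is possible at any positive rate, it is possible at the full non-adversarial rate.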
In this paper, we consider a multiple-input multiple-output (MIMO) radar system for localizing a target based on its reflected echo signals. Specifically, we aim to estimate the random and unknown angle information of the target by exploiting its prior distribution information. First, we characterize the estimation performance by deriving the posterior Cram\'er-Rao bound (PCRB), which quantifies a lower bound on the estimation mean-squared error (MSE). Since the PCRB takes a complicated form, we derive a tight upper bound on it to approximate the estimation performance. Based on this, we analytically show that by exploiting the prior distribution information, the PCRB is never larger than the CRB averaged over random angle realizations without prior information exploitation. Next, we formulate the transmit signal optimization problem to minimize the PCRB upper bound. We show that the optimal sample covariance matrix has a rank-one structure and derive the optimal signal solution in closed form. Numerical results show that our proposed design achieves significantly improved PCRB performance compared to various benchmark schemes.
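To make the rank-one structure concrete, the sketch below builds a rank-one sample covariance matrix that concentrates the transmit power along a single beamforming direction. The uniform-linear-array steering vector and the choice of steering toward a single angle are our own illustrative assumptions, not the paper's closed-form solution.

```python
import numpy as np

def steering_vector(theta, n_antennas, spacing=0.5):
    """Steering vector of a uniform linear array, half-wavelength spacing."""
    k = np.arange(n_antennas)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

def rank_one_covariance(theta, n_antennas, power):
    """Rank-one sample covariance R = p * a(theta) a(theta)^H."""
    a = steering_vector(theta, n_antennas)
    a = a / np.linalg.norm(a)
    return power * np.outer(a, a.conj())

R = rank_one_covariance(theta=np.deg2rad(20), n_antennas=8, power=1.0)
print(np.linalg.matrix_rank(R), np.trace(R).real)  # rank 1, power budget 1.0
```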
Recently, multi-modality scene perception tasks, e.g., image fusion and scene understanding, have attracted widespread attention in intelligent vision systems. However, earlier efforts typically boost a single task in isolation and neglect the others, seldom investigating the underlying connections among tasks for joint promotion. To overcome these limitations, we establish a hierarchical dual-task-driven deep model to bridge these tasks. Concretely, we first construct an image fusion module to fuse complementary characteristics and cascade two task-related modules: a discriminator for visual effects and a semantic network for feature measurement. We provide a bi-level perspective to formulate image fusion and the follow-up downstream tasks. To incorporate distinct task-related responses into image fusion, we treat image fusion as the primary goal and the two task modules as learnable constraints. Furthermore, we develop an efficient first-order approximation to compute the corresponding gradients and present dynamic weighted aggregation to balance the gradients for fusion learning. Extensive experiments demonstrate the superiority of our method, which not only produces visually pleasing fused results but also delivers significant improvements in detection and segmentation over state-of-the-art approaches.
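As a rough sketch of what dynamic weighted gradient aggregation can look like, the snippet below balances per-task gradients before a fusion update; the weighting rule (inverse gradient norms) is our own illustrative assumption, not the paper's exact scheme.

```python
import numpy as np

def aggregate_gradients(grads, eps=1e-8):
    """Combine per-task gradients with weights inversely proportional to
    their norms, so that no single task dominates the fusion update."""
    norms = np.array([np.linalg.norm(g) for g in grads])
    weights = 1.0 / (norms + eps)
    weights /= weights.sum()
    return sum(w * g for w, g in zip(weights, grads))

g_discriminator = np.array([0.8, -0.2, 0.5])   # visual-effect task gradient
g_semantic = np.array([0.05, 0.10, -0.03])     # semantic-feature task gradient
print(aggregate_gradients([g_discriminator, g_semantic]))
```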
Not everyone has professional photography skills and sufficient shooting time, so captured images occasionally exhibit tilt. In this paper, we propose a new and practical task, named Rotation Correction, to automatically correct the tilt with high content fidelity when the rotation angle is unknown. This task can be easily integrated into image editing applications, allowing users to correct rotated images without any manual operations. To this end, we leverage a neural network to predict the optical flows that can warp the tilted images to be perceptually horizontal. However, pixel-wise optical flow estimation from a single image is severely unstable, especially for large-angle tilts. To enhance its robustness, we propose a simple but effective prediction strategy to form a robust elastic warp. In particular, we first regress a mesh deformation that can be transformed into robust initial optical flows. Then we estimate residual optical flows to endow our network with the flexibility of pixel-wise deformation, further correcting the details of the tilted images. To establish an evaluation benchmark and train the learning framework, we present a comprehensive rotation correction dataset with a large diversity of scenes and rotation angles. Extensive experiments demonstrate that, even without the angle prior, our algorithm outperforms other state-of-the-art solutions that require this prior. The code and dataset are available at //github.com/nie-lang/RotationCorrection.
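A minimal sketch of the two-stage warp, under our own assumptions: a coarse mesh of per-vertex 2D offsets is upsampled to an initial dense flow and then refined by a residual flow. Bilinear resampling via scipy.ndimage.zoom is a crude stand-in for proper mesh-to-flow interpolation.

```python
import numpy as np
from scipy.ndimage import zoom

def mesh_to_initial_flow(mesh_flow, image_shape):
    """Upsample a coarse mesh of 2D offsets (H_m, W_m, 2) to a dense
    initial optical flow (H, W, 2) by bilinear interpolation."""
    h, w = image_shape
    hm, wm, _ = mesh_flow.shape
    return np.stack(
        [zoom(mesh_flow[..., c], (h / hm, w / wm), order=1) for c in (0, 1)],
        axis=-1,
    )

# Coarse 4x4 mesh regressed by a network (toy values), 64x64 image.
mesh_flow = np.random.randn(4, 4, 2)
initial_flow = mesh_to_initial_flow(mesh_flow, (64, 64))
residual_flow = 0.1 * np.random.randn(64, 64, 2)  # pixel-wise refinement
final_flow = initial_flow + residual_flow          # warp field for correction
print(final_flow.shape)  # (64, 64, 2)
```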
In 2021, Ethereum adjusted its transaction pricing mechanism by implementing EIP-1559, which introduces the base fee: a per-block network fee that is burned and adjusted dynamically in accordance with network demand. The authors of the Ethereum Improvement Proposal (EIP) noted that a miner with more than 50% of the mining power might have an incentive to deviate from the honest mining strategy; instead, such a miner could propose a series of empty blocks to increase its future rewards. In this paper, we generalize this attack and show that, under rational player behavior, deviating from the honest strategy can be profitable even for a miner with less than 50% of the mining power. Further, even when miners do not collaborate, it is rational for miners with smaller mining power to join the attack.
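For context, EIP-1559 adjusts the base fee block by block according to how full the parent block was. The sketch below implements a simplified version of the standard update rule (adjustment denominator 8, gas target equal to half the gas limit) and shows how a run of empty blocks drives the base fee down by about 12.5% per block.

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def next_base_fee(parent_base_fee, parent_gas_used, gas_target):
    """EIP-1559 base fee update (simplified integer arithmetic)."""
    if parent_gas_used == gas_target:
        return parent_base_fee
    delta = (parent_base_fee * (parent_gas_used - gas_target)
             // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    if parent_gas_used > gas_target:
        delta = max(delta, 1)          # the fee always rises on full blocks
    return max(parent_base_fee + delta, 0)

# A miner proposing empty blocks: the base fee shrinks ~12.5% per block.
fee, gas_target = 100_000_000_000, 15_000_000   # 100 gwei, 15M gas target
for _ in range(5):
    fee = next_base_fee(fee, parent_gas_used=0, gas_target=gas_target)
    print(fee)
```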
Asynchronous Byzantine Atomic Broadcast (ABAB) promises, in comparison to partially synchronous approaches, simplicity of implementation, increased performance, and increased robustness. For partially synchronous approaches, it is well known that small Trusted Execution Environments (TEEs), e.g., MinBFT's unique sequential identifier generator (USIG), can reduce the communication effort while increasing the fault tolerance. For ABAB, the research community assumes that the use of TEEs increases performance and robustness. However, despite the existence of a fault-model compiler, a concrete TEE-based approach is not yet directly available. In this brief announcement, we show that the recently proposed DAG-Rider approach can be transformed to provide ABAB with $n\geq 2f+1$ processes, of which $f$ are faulty. We leverage MinBFT's USIG to implement Reliable Broadcast with $n>f$ processes and show that the quorum-critical proofs of DAG-Rider still hold when the quorum size is adapted to $\lfloor \frac{n}{2} \rfloor + 1$.
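A quick sanity check of the adapted quorum size (our own illustration): with $n \geq 2f+1$, any two quorums of size $\lfloor n/2 \rfloor + 1$ intersect, and every such quorum contains at least one correct process. The brute-force check below verifies both properties for small n.

```python
from itertools import combinations

def quorum_properties_hold(n, f):
    """Check that quorums of size floor(n/2)+1 pairwise intersect and
    that each contains at least one of the n - f correct processes."""
    q = n // 2 + 1
    faulty = set(range(f))               # w.l.o.g. the first f are faulty
    for quorum in combinations(range(n), q):
        if not (set(quorum) - faulty):   # a quorum made only of faulty nodes
            return False
    # Two quorums of size q each must share a process whenever 2q > n.
    return 2 * q > n

for n, f in [(3, 1), (5, 2), (7, 3)]:    # n >= 2f + 1
    print(n, f, quorum_properties_hold(n, f))  # True, True, True
```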
The fidelity-based smooth min-relative entropy is a distinguishability measure that has appeared in a variety of contexts in prior work on quantum information, including resource theories such as thermodynamics and coherence. Here we provide a comprehensive study of this quantity. First, we prove that it satisfies several basic properties, including the data-processing inequality. We also establish connections between the fidelity-based smooth min-relative entropy and other widely used information-theoretic quantities, including the smooth min-relative entropy and the smooth sandwiched R\'enyi relative entropy, of which the sandwiched R\'enyi relative entropy and the smooth max-relative entropy are special cases. After that, we use these connections to establish the second-order asymptotics of the fidelity-based smooth min-relative entropy and all smooth sandwiched R\'enyi relative entropies, finding that the first-order term is the quantum relative entropy and the second-order term involves the quantum relative entropy variance. Using the derived properties, we also show how the fidelity-based smooth min-relative entropy provides one-shot bounds for operational tasks in general resource theories in which the target state is mixed, with a particular example being randomness distillation. These observations then lead to second-order expansions of the upper bounds on distillable randomness, as well as the precise second-order asymptotics of the distillable randomness of particular classical-quantum states. Finally, we establish semi-definite programs for the smooth max-relative entropy and the smooth conditional min-entropy, as well as a bilinear program for the fidelity-based smooth min-relative entropy, which we subsequently use to explore the tightness of a bound relating the last to the first.
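Schematically, a second-order expansion of this kind takes the familiar form (our notation; $\Phi^{-1}$ is the inverse standard normal CDF, and the precise argument of $\Phi^{-1}$ depends on which smoothed quantity is expanded and on the smoothing convention):
\[
\widetilde{D}^{\varepsilon}\!\left(\rho^{\otimes n} \,\middle\|\, \sigma^{\otimes n}\right)
= n\, D(\rho\|\sigma) + \sqrt{n\, V(\rho\|\sigma)}\; \Phi^{-1}(\varepsilon) + O(\log n),
\]
where $D(\rho\|\sigma)$ is the quantum relative entropy and $V(\rho\|\sigma)$ is the quantum relative entropy variance.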
Given $\mathbf A \in \mathbb{R}^{n \times n}$ with entries bounded in magnitude by $1$, it is well known that if $S \subset [n] \times [n]$ is a uniformly random subset of $s = \tilde{O}(n/\epsilon^2)$ entries, and if ${\mathbf A}_S$ equals $\mathbf A$ on the entries in $S$ and is zero elsewhere, then $\|\mathbf A - \frac{n^2}{s} \cdot {\mathbf A}_S\|_2 \le \epsilon n$ with high probability, where $\|\cdot\|_2$ is the spectral norm. We show that for positive semidefinite (PSD) matrices, no randomness is needed at all in this statement. Namely, there exists a fixed subset $S$ of $\tilde{O}(n/\epsilon^2)$ entries that acts as a universal sparsifier: the above error bound holds simultaneously for every bounded-entry PSD matrix $\mathbf A \in \mathbb{R}^{n \times n}$. One can view this result as a significant extension of a Ramanujan expander graph: our subset sparsifies every bounded-entry PSD matrix, not just the all-ones matrix. We leverage the existence of such universal sparsifiers to give the first deterministic algorithms for several central problems related to singular value computation that run in faster than matrix multiplication time. We also prove universal sparsification bounds for non-PSD matrices, showing that $\tilde{O}(n/\epsilon^4)$ entries suffice to achieve error $\epsilon \cdot \max(n,\|\mathbf A\|_1)$, where $\|\mathbf A\|_1$ is the trace norm. We prove that this is optimal up to an $\tilde{O}(1/\epsilon^2)$ factor. Finally, we give an improved deterministic spectral approximation algorithm for PSD $\mathbf A$ with entries lying in $\{-1,0,1\}$, which we show is nearly information-theoretically optimal.
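To make the guarantee in the opening sentence concrete, the sketch below samples $s$ uniformly random entries of a bounded-entry PSD matrix and checks the rescaled spectral-norm error empirically. The constants are illustrative, and this demonstrates the randomized statement, not the deterministic universal construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 500, 1.0
s = int(8 * n * np.log(n) / eps**2)        # ~ O(n log n / eps^2) samples

# Bounded-entry PSD matrix: A = B B^T with rows of B on the unit sphere.
B = rng.standard_normal((n, 20))
B /= np.linalg.norm(B, axis=1, keepdims=True)
A = B @ B.T                                 # entries in [-1, 1], PSD

# Keep s uniformly random entries, zero out the rest, rescale by n^2/s.
mask = np.zeros(n * n)
mask[rng.choice(n * n, size=s, replace=False)] = 1.0
A_S = A * mask.reshape(n, n)

err = np.linalg.norm(A - (n * n / s) * A_S, 2)   # spectral norm
print(err, "vs eps*n =", eps * n)   # err is typically well below eps*n
```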
Recently, ensembles have been applied to deep metric learning to yield state-of-the-art results. Deep metric learning aims to learn deep neural networks for feature embeddings whose distances satisfy a given constraint. In deep metric learning, an ensemble averages the distances learned by multiple learners. As one important aspect of an ensemble, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks so that each learner can attend to different parts of the object. We also propose a divergence loss that encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning, and experimental results show that it outperforms state-of-the-art methods by a significant margin on image retrieval tasks.
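A minimal sketch of a divergence loss in the spirit described above, penalizing similarity between different learners' embeddings of the same samples; the cosine-similarity form is our own illustrative choice, not necessarily the paper's exact loss.

```python
import numpy as np

def divergence_loss(embeddings):
    """Mean pairwise cosine similarity between learners' embeddings of the
    same batch; minimizing this pushes the learners apart."""
    normed = [e / np.linalg.norm(e, axis=1, keepdims=True) for e in embeddings]
    m, total, pairs = len(normed), 0.0, 0
    for i in range(m):
        for j in range(i + 1, m):
            total += np.mean(np.sum(normed[i] * normed[j], axis=1))
            pairs += 1
    return total / pairs

# Three learners, batch of 4 samples, 8-dim embedding per learner.
rng = np.random.default_rng(0)
learners = [rng.standard_normal((4, 8)) for _ in range(3)]
print(divergence_loss(learners))
```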