
This study investigates the fundamental limits of variable-length compression in which prefix-free constraints are not imposed (i.e., one-to-one codes are studied) and non-vanishing error probabilities are permitted. Owing in part to a crucial relation between the variable-length and fixed-length compression problems, our analysis requires a careful and refined treatment of the fundamental limits of fixed-length compression in the setting where the error probabilities are allowed to approach either zero or one polynomially in the blocklength. To obtain these refinements, we employ tools from moderate deviations and strong large deviations. Finally, we provide the third-order asymptotics for the problem of variable-length compression with non-vanishing error probabilities. We show that, unlike several other information-theoretic problems in which the third-order asymptotics are known, for the problem of interest here the third-order term depends on the permissible error probability.
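For orientation, here is a hedged sketch of the kind of expansion involved, namely the commonly cited zero-error one-to-one baseline rather than this paper's $\epsilon$-dependent refinement: for a memoryless source with entropy $H(X)$, the minimal expected length of a one-to-one code for $X^n$ is usually stated as
$$ \min_{f \text{ one-to-one}} \mathbb{E}\big[\ell(f(X^n))\big] = nH(X) - \tfrac{1}{2}\log_2 n + O(1), $$
and the point of the abstract above is that once a non-vanishing error probability $\epsilon$ is permitted, the third-order term acquires a dependence on $\epsilon$.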

Related content

This new edition of the TOOLS conference series revives the tradition of the 50 conferences held between 1989 and 2012. TOOLS started out as "Technology of Object-Oriented Languages and Systems" and later broadened to cover all innovative aspects of software technology; many of today's most important software concepts were first introduced there. TOOLS 50+1, held near Kazan, Russia, in 2019, continues the series in the same spirit of innovation, with a passion for everything related to software, a combination of scientific rigor and industrial applicability, and an openness to all trends and communities in the field.
November 28, 2021

Error-bounded lossy compression is one of the most effective techniques for scientific data reduction. However, the traditional trial-and-error approach used to configure lossy compressors for finding the optimal trade-off between reconstructed data quality and compression ratio is prohibitively expensive. To resolve this issue, we develop a general-purpose analytical ratio-quality model based on the prediction-based lossy compression framework, which can effectively foresee the reduced data quality and compression ratio, as well as the impact of the lossy compressed data on post-hoc analysis quality. Our analytical model significantly improves prediction-based lossy compression in three use-cases: (1) optimization of the predictor by selecting the best-fit predictor; (2) memory compression with a target ratio; and (3) in-situ compression optimization by fine-grained error-bound tuning of various data partitions. We evaluate our analytical model on 10 scientific datasets, demonstrating its high accuracy (93.47% accuracy on average) and low computational cost (up to 18.7X lower than the trial-and-error approach) for estimating the compression ratio and the impact of lossy compression on post-hoc analysis quality. We also verify the high efficiency of our ratio-quality model on different applications across the three use-cases. In addition, the experiments demonstrate that our modeling-based approach reduces the time to store 3D Reverse Time Migration data by up to 3.4X over the traditional solution using 128 CPU cores from 8 compute nodes.
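As a rough, self-contained illustration of how such a ratio-quality estimate can be formed without running the compressor (a minimal sketch, not the paper's analytical model; the function name, the toy previous-value predictor, and the 32-bit baseline are assumptions made here), the entropy of the quantized prediction residuals serves as a proxy for the encoded bits per value, and the uniform quantization error bound yields a PSNR estimate:

    import numpy as np

    def estimate_ratio_and_quality(data, error_bound):
        # Hedged sketch of a ratio-quality estimate for a prediction-based,
        # error-bounded lossy compressor (illustrative, not the paper's model).
        flat = data.ravel().astype(np.float64)
        pred = np.concatenate(([0.0], flat[:-1]))        # toy previous-value predictor
        residual = flat - pred
        # Linear-scale quantization of residuals with bin width 2 * error_bound.
        codes = np.round(residual / (2.0 * error_bound)).astype(np.int64)
        # Shannon entropy of the codes approximates the entropy-coded bits per value.
        _, counts = np.unique(codes, return_counts=True)
        p = counts / counts.sum()
        bits_per_value = -(p * np.log2(p)).sum()
        ratio = 32.0 / max(bits_per_value, 1e-9)         # relative to 32-bit floats
        # Quantization error is roughly uniform on [-e, e], so MSE is about e^2 / 3.
        mse = error_bound ** 2 / 3.0
        value_range = float(flat.max() - flat.min())
        psnr = 10.0 * np.log10(value_range ** 2 / mse) if value_range > 0 else float("inf")
        return ratio, psnr

    ratio, psnr = estimate_ratio_and_quality(np.sin(np.linspace(0.0, 100.0, 100000)), 1e-3)
    print(f"estimated ratio ~{ratio:.1f}x, estimated PSNR ~{psnr:.1f} dB")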

In blind compression of quantum states, a sender Alice is given a specimen of a quantum state $\rho$ drawn from a known ensemble (but without knowing what $\rho$ is), and she transmits sufficient quantum data to a receiver Bob so that he can decode a near-perfect specimen of $\rho$. For many such states drawn i.i.d. from the ensemble, the asymptotically achievable rate is the number of qubits required to be transmitted per state. The Holevo information is a lower bound on the achievable rate, and it is attained for pure-state ensembles, or in the related scenario of entanglement-assisted visible compression of mixed states, wherein Alice knows which state is drawn. In this paper, we prove a general and robust lower bound on the achievable rate for ensembles of classical states, which holds even in the least demanding setting, where Alice and Bob share free entanglement and a constant per-copy error is allowed. We apply the bound to a specific ensemble of only two states and prove a near-maximal separation (saturating the dimension bound in leading order) between the best achievable rate and the Holevo information for constant error. This also implies that the ensemble is incompressible -- compression does not reduce the communication cost by much. Since the states are classical, the observed incompressibility is not fundamentally quantum mechanical. We lower bound the difference between the achievable rate and the Holevo information in terms of quantitative limitations on cloning the specimen or distinguishing the two classical states.
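For reference, the Holevo information of an ensemble $\{p_x, \rho_x\}$, which the abstract uses as the benchmark, is
$$ \chi\big(\{p_x, \rho_x\}\big) = S\Big(\sum_x p_x \rho_x\Big) - \sum_x p_x S(\rho_x), $$
where $S(\cdot)$ is the von Neumann entropy; as noted above, it is attained for pure-state ensembles, and the separation result concerns how far above $\chi$ the blind compression rate can be forced for mixed (here, classical) states.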

We introduce the \textit{generalized join the shortest queue model with retrials} and two infinite-capacity orbit queues. Three independent Poisson streams of jobs, namely a \textit{smart} stream and two \textit{dedicated} streams, flow into a single-server system that can hold at most one job. Arriving jobs that find the server occupied are routed to the orbits as follows: blocked jobs from the \textit{smart} stream are routed to the shortest orbit queue and, in case of a tie, they choose an orbit at random, whereas blocked jobs from the \textit{dedicated} streams are routed directly to their own orbits. Orbiting jobs retry to connect with the server at different retrial rates, i.e., the orbit queues are heterogeneous. Applications of such a system are found in the modelling of wireless cooperative networks. We are interested in the asymptotic behaviour of the stationary distribution of this model, provided that the system is stable. More precisely, we investigate the conditions under which the tail asymptotic of the minimum orbit queue length is exactly geometric. Moreover, we apply a heuristic asymptotic approach to obtain approximations of the steady-state joint orbit queue-length distribution. Useful numerical examples are presented, showing that the results obtained through the asymptotic analysis and the heuristic approach agree.
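To make the tail statement concrete, "exactly geometric" asymptotics for the minimum orbit queue length $N_{\min} = \min(N_1, N_2)$ can be read in the following generic form (the constants are left unspecified here and are not taken from the paper):
$$ \Pr[N_{\min} = n] \sim c\,\sigma^{n} \quad \text{as } n \to \infty, \qquad c > 0,\ \sigma \in (0,1), $$
so the question is under which stability and rate conditions such a purely geometric decay holds, rather than, say, a geometric decay modulated by a polynomial factor.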

We propose an efficient, accurate and robust IMEX solver for the compressible Navier-Stokes equations with a general equation of state. The method, which is based on an $h-$adaptive Discontinuous Galerkin spatial discretization and on an Additive Runge-Kutta IMEX method for time discretization, is tailored for low Mach number applications and allows low Mach regimes to be simulated at a significantly reduced computational cost, while maintaining full second-order accuracy for higher Mach number regimes as well. The method has been implemented in the framework of the deal.II numerical library, whose adaptive mesh refinement capabilities are employed to enhance efficiency. Refinement indicators appropriate for real gas phenomena have been introduced. A number of numerical experiments on classical benchmarks for compressible flows, and on their extension to real gases, demonstrate the properties of the proposed method.
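As a hedged sketch of the time discretization referred to (a generic additive Runge-Kutta IMEX form, not the paper's specific tableau), write the semi-discrete system as $\partial_t U = \mathcal{F}_E(U) + \mathcal{F}_I(U)$, with the stiff acoustic/pressure terms collected in $\mathcal{F}_I$; the stage values then satisfy
$$ U^{(i)} = U^{n} + \Delta t \sum_{j=1}^{i-1} \tilde{a}_{ij}\, \mathcal{F}_E\big(U^{(j)}\big) + \Delta t \sum_{j=1}^{i} a_{ij}\, \mathcal{F}_I\big(U^{(j)}\big), \qquad i = 1, \dots, s, $$
so only the implicit part requires a solve at each stage, which removes the acoustic time-step restriction in low Mach regimes while the explicit part keeps the convective terms cheap.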

The invariant energy quadratization (IEQ) method has been widely used to design energy-stable numerical schemes for phase-field and gradient flow models. In this letter, we revisit the IEQ method and provide a new perspective on its ability to preserve the original energy dissipation laws. Although the IEQ method has many merits, one major disadvantage is that it usually respects only a modified energy law, in which the modified energy is expressed in terms of the auxiliary variables, while dissipation laws in terms of the original energy are not guaranteed. Using the widely used Cahn-Hilliard equation as an example, we demonstrate that the Runge-Kutta IEQ method can indeed preserve the original energy dissipation laws in certain situations, up to arbitrarily high-order accuracy. Interested readers are encouraged to apply our idea to other phase-field equations or gradient flow models.
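For concreteness, a generic sketch of the IEQ reformulation (with $C_0$ a constant ensuring positivity; notation assumed here): for an energy $E(\phi) = \int_\Omega \frac{\epsilon^2}{2}|\nabla\phi|^2 + F(\phi)\,\mathrm{d}x$, introduce the auxiliary variable $q = \sqrt{F(\phi) + C_0}$, so that
$$ E(\phi, q) = \int_\Omega \frac{\epsilon^2}{2}|\nabla\phi|^2 + q^2 - C_0 \,\mathrm{d}x, \qquad q_t = \frac{F'(\phi)}{2\sqrt{F(\phi) + C_0}}\,\phi_t. $$
The reformulated energy is quadratic in $(\phi, q)$, and IEQ schemes dissipate this modified energy by construction; the contribution above is to identify when the original energy $E(\phi)$ is dissipated as well.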

This paper is concerned with the lossy compression of general random variables, specifically with rate-distortion theory and quantization of random variables taking values in general measurable spaces such as manifolds and fractal sets. Manifold structures are prevalent in data science, e.g., in compressed sensing, machine learning, image processing, and handwritten digit recognition. Fractal sets find application in image compression and in the modeling of Ethernet traffic. Our main contributions are bounds on the rate-distortion function and the quantization error. These bounds are very general and essentially only require the existence of reference measures satisfying certain regularity conditions in terms of small ball probabilities. To illustrate the wide applicability of our results, we particularize them to random variables taking values in i) manifolds, namely hyperspheres and Grassmannians, and ii) self-similar sets characterized by iterated function systems satisfying the weak separation property.
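Recall the standard rate-distortion function that these bounds target: for a distortion measure $d$ and distortion level $D$,
$$ R(D) = \inf_{P_{\hat{X}\mid X}\,:\; \mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X}), $$
with the infimum taken over conditional distributions of the reconstruction $\hat{X}$ given $X$; the results above bound $R(D)$ and the quantization error via small ball probabilities of suitable reference measures on the underlying space.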

We present a family of discretizations for the Variable Eddington Factor (VEF) equations that have high-order accuracy on curved meshes and efficient preconditioned iterative solvers. The VEF discretizations are combined with a high-order Discontinuous Galerkin transport discretization to form an effective high-order, linear transport method. The VEF discretizations are derived by extending the unified analysis of Discontinuous Galerkin methods for elliptic problems to the VEF equations. This framework is used to define analogs of the interior penalty, second method of Bassi and Rebay, minimal dissipation local Discontinuous Galerkin, and continuous finite element methods. The analysis of subspace correction preconditioners, which use a continuous operator to iteratively precondition the discontinuous discretization, is extended to the case of the non-symmetric VEF system. Numerical results demonstrate that the VEF discretizations have arbitrary-order accuracy on curved meshes, preserve the thick diffusion limit, and are effective on a proxy problem from thermal radiative transfer in both outer transport iterations and inner preconditioned linear solver iterations. In addition, a parallel weak scaling study of the interior penalty VEF discretization demonstrates the scalability of the method out to 1152 processors.
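As a hedged sketch of the system being discretized (one common steady-state, mono-energetic form; the notation is assumed here rather than taken from the paper), the VEF equations couple the scalar flux $\varphi$ and the current $\vec{J}$ through the Eddington tensor $\mathbf{E}$ computed from the transport solution $\psi$:
$$ \nabla\cdot\vec{J} + \sigma_a \varphi = Q_0, \qquad \nabla\cdot\big(\mathbf{E}\,\varphi\big) + \sigma_t\,\vec{J} = Q_1, \qquad \mathbf{E} = \frac{\int_{\mathbb{S}^2} \Omega\otimes\Omega\,\psi\,\mathrm{d}\Omega}{\int_{\mathbb{S}^2} \psi\,\mathrm{d}\Omega}, $$
with $Q_0$ and $Q_1$ the zeroth and first source moments. Because $\mathbf{E}$ is, in general, not a multiple of the identity, the resulting moment system is non-symmetric, which is why the subspace correction preconditioning analysis has to be extended beyond the symmetric elliptic setting.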

Joint modeling of a large number of variables often requires dimension reduction strategies that lead to structural assumptions of the underlying correlation matrix, such as equal pair-wise correlations within subsets of variables. The underlying correlation matrix is thus of interest for both model specification and model validation. In this paper, we develop tests of the hypothesis that the entries of the Kendall rank correlation matrix are linear combinations of a smaller number of parameters. The asymptotic behavior of the proposed test statistics is investigated both when the dimension is fixed and when it grows with the sample size. We pay special attention to the restricted hypothesis of partial exchangeability, which contains full exchangeability as a special case. We show that under partial exchangeability, the test statistics and their large-sample distributions simplify, which leads to computational advantages and better performance of the tests. We propose various scalable numerical strategies for implementation of the proposed procedures, investigate their behavior through simulations and power calculations under local alternatives, and demonstrate their use on a real dataset of mean sea levels at various geographical locations.
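As a small, self-contained illustration of the objects involved (a toy sketch, not the paper's test statistics; the block structure and helper names are assumptions made here), the code below computes the Kendall rank correlation matrix with scipy and, for a hypothetical partition of the variables into blocks, measures how much the entries forced to be equal under partial exchangeability actually spread:

    import numpy as np
    from scipy.stats import kendalltau

    def kendall_matrix(X):
        # Pairwise Kendall rank correlation matrix of the columns of X (n x d).
        d = X.shape[1]
        T = np.eye(d)
        for i in range(d):
            for j in range(i + 1, d):
                tau, _ = kendalltau(X[:, i], X[:, j])
                T[i, j] = T[j, i] = tau
        return T

    def block_spread(T, blocks):
        # Toy diagnostic: variance of the Kendall taus that a partially
        # exchangeable model forces to share a common value (per block pair).
        spread = {}
        for a, rows in enumerate(blocks):
            for b, cols in enumerate(blocks):
                if a > b:
                    continue
                vals = [T[i, j] for i in rows for j in cols if i < j]
                if vals:
                    spread[(a, b)] = float(np.var(vals))
        return spread

    rng = np.random.default_rng(0)
    cov = np.full((4, 4), 0.3) + 0.7 * np.eye(4)        # exchangeable Gaussian example
    X = rng.multivariate_normal(np.zeros(4), cov, size=500)
    print(block_spread(kendall_matrix(X), blocks=[[0, 1], [2, 3]]))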

Most of the existing neural video compression methods adopt the predictive coding framework, which first generates a predicted frame and then encodes its residue with respect to the current frame. However, in terms of compression ratio, predictive coding is only a sub-optimal solution, as it uses a simple subtraction operation to remove the redundancy across frames. In this paper, we propose a deep contextual video compression framework that enables a paradigm shift from predictive coding to conditional coding. In particular, we try to answer the following questions: how to define, use, and learn the condition under a deep video compression framework. To tap the potential of conditional coding, we propose using the feature-domain context as the condition. This enables us to leverage the high-dimensional context to carry rich information to both the encoder and the decoder, which helps reconstruct high-frequency content for higher video quality. Our framework is also extensible, in that the condition can be flexibly designed. Experiments show that our method significantly outperforms previous state-of-the-art (SOTA) deep video compression methods. When compared with x265 using the veryslow preset, we achieve 26.0% bitrate savings for 1080p standard test videos.
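To make the contrast between conditional and residual (predictive) coding concrete, here is a minimal PyTorch-style sketch (the layer sizes and names are placeholders, not the paper's architecture): the encoder consumes the current frame together with a learned feature-domain context, instead of coding a subtraction residual:

    import torch
    import torch.nn as nn

    class ConditionalEncoder(nn.Module):
        # Hedged sketch of conditional coding: condition on a feature-domain
        # context rather than subtracting a predicted frame.
        def __init__(self, ctx_channels=64, latent_channels=96):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(3 + ctx_channels, 128, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
                nn.Conv2d(128, latent_channels, kernel_size=5, stride=2, padding=2),
            )

        def forward(self, frame, context):
            # frame: (B, 3, H, W); context: (B, ctx_channels, H, W)
            return self.encode(torch.cat([frame, context], dim=1))

    # Residual (predictive) coding would instead encode frame - predicted_frame.
    y = ConditionalEncoder()(torch.randn(1, 3, 64, 64), torch.randn(1, 64, 64, 64))
    print(y.shape)  # torch.Size([1, 96, 16, 16])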

Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks. However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices. In particular, the input word embedding matrix accounts for a significant proportion of the model's memory footprint, due to the large input vocabulary and embedding dimensions. Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from the original teacher models. We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions. Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary. We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model. Our method is able to compress the BERT_BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB. Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.
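As a small sketch of the layer-wise projection idea (illustrative only; the dimensions and the single-layer MSE objective are assumptions, not the paper's exact scheme), a learned projection maps the narrower student hidden states into the teacher's space before comparing them:

    import torch
    import torch.nn as nn

    class ProjectionDistillLoss(nn.Module):
        # Hedged sketch: align student hidden states with the teacher's through
        # a learned projection; such a projection can be shared across layers.
        def __init__(self, student_dim=192, teacher_dim=768):
            super().__init__()
            self.proj = nn.Linear(student_dim, teacher_dim, bias=False)

        def forward(self, student_hidden, teacher_hidden):
            # student_hidden: (B, T, student_dim); teacher_hidden: (B, T, teacher_dim)
            return nn.functional.mse_loss(self.proj(student_hidden), teacher_hidden.detach())

    loss = ProjectionDistillLoss()(torch.randn(2, 8, 192), torch.randn(2, 8, 768))
    print(float(loss))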
