
The Galois inner product is a generalization of the Euclidean and Hermitian inner products. The Galois hull of a linear code is the intersection of the code with its Galois dual code, and it has attracted growing research interest in recent years. In this paper, we study Galois hulls of linear codes. First, we establish a symmetry in the dimensions of Galois hulls and characterize new necessary and sufficient conditions for linear codes to be Galois self-orthogonal, Galois self-dual, or Galois linear complementary dual. Then, building on these properties, we extend the previous theory and propose explicit methods to construct Galois self-orthogonal codes of lengths $n+2i$ ($i\geq 0$) and $n+2i+1$ ($i\geq 1$) from Galois self-orthogonal codes of length $n$. As applications, linear codes of lengths $n+2i$ and $n+2i+1$ with Galois hulls of arbitrary dimensions follow immediately. We also construct two new classes of Hermitian self-orthogonal MDS codes. Finally, applying all these results to the construction of entanglement-assisted quantum error-correcting codes (EAQECCs), we obtain many new EAQECCs and MDS EAQECCs with rates greater than or equal to $\frac{1}{2}$ and positive net rates.
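As a toy illustration (not from the paper), the simplest instance of a hull computation is the Euclidean hull of a binary code, obtained here by brute force: enumerate the code, enumerate its dual, and intersect. The example generator matrix is invented and happens to give a self-dual code, so the hull is the whole code.

```python
from itertools import product

def span(gens, n):
    """All GF(2) linear combinations of the generator rows."""
    return {tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 2
                  for i in range(n))
            for coeffs in product((0, 1), repeat=len(gens))}

def dual(code, n):
    """Brute-force Euclidean dual: all vectors orthogonal to every codeword."""
    return {v for v in product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)}

# Generator matrix of a small [4, 2] binary code (it happens to be self-dual).
G = [(1, 0, 1, 0), (0, 1, 0, 1)]
C = span(G, 4)
hull = C & dual(C, 4)
hull_dim = len(hull).bit_length() - 1   # dimension = log2(|hull|)
```

For a self-dual code the hull dimension equals the code dimension (here 2); Galois hulls replace the Euclidean inner product with the $e$-Galois inner product over $\mathbb{F}_{p^h}$.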

Related content

Simulating the propagation of acoustic waves by solving a system of three coupled first-order linear differential equations with a k-space pseudo-spectral method is popular for biomedical applications, first because an open-source toolbox implementing this numerical approach is available, and second because of its efficiency: the k-space pseudo-spectral method allows coarser computational grids and larger time steps than finite difference and finite element methods at the same accuracy. The goal of this study is to compare this numerical wave solver with an analytical solution of the wave equation, based on the Green's function, for computing the propagation of acoustic waves in homogeneous media. The comparison is done in the frequency domain. Using the k-Wave solver, a match to the Green's function is obtained after modifying how the mass source is included in the linearised equation of continuity (conservation of mass) in the associated system of wave equations.
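The analytical reference such a comparison uses can be sketched in a few lines. Assuming the standard free-space Green's function of the 3-D Helmholtz equation, $G(r) = e^{ikr}/(4\pi r)$, the frequency-domain field of a point source decays as $1/r$ (the frequency and distances below are illustrative, not taken from the study):

```python
import numpy as np

def greens_3d(r, k):
    """Free-space Green's function of the 3-D Helmholtz equation."""
    return np.exp(1j * k * r) / (4 * np.pi * r)

k = 2 * np.pi * 1e6 / 1500.0        # wavenumber for 1 MHz in water (c ~ 1500 m/s)
r = np.array([0.01, 0.02, 0.04])    # observation distances in metres
p = greens_3d(r, k)

# the amplitude halves each time the distance doubles (1/r spreading)
ratios = np.abs(p[:-1]) / np.abs(p[1:])
```

A numerical solver's frequency-domain output at these points can then be divided by `p` to check both amplitude and phase agreement.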

MITRE ATT&CK is a widespread ontology that specifies tactics, techniques, and procedures (TTPs) typical of malware behaviour, making it possible to exploit such TTPs for malware identification. However, this is far from being an easy task, given that benign use of software can also match some of these TTPs. In this paper, we present RADAR, a system that can identify malicious behaviour in network traffic in two stages: first, RADAR extracts MITRE ATT&CK TTPs from arbitrary network traffic captures, and, second, it deploys decision trees to differentiate between malicious and benign uses of the detected TTPs. In order to evaluate RADAR, we created a dataset comprising 2,286,907 malicious and benign samples, for a total of 84,792,452 network flows. The experimental analysis confirms that RADAR is able to $(i)$ match samples to multiple different TTPs, and $(ii)$ effectively detect malware with an AUC score of 0.868. Besides being effective, RADAR is also highly configurable, interpretable, privacy-preserving, and efficient, and it can be easily integrated with existing security infrastructure to complement its capabilities.
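RADAR's actual trees are learned from millions of flows; as a purely hypothetical illustration of the second stage, here is a one-feature decision stump over binary TTP-match indicators. The technique IDs and all samples below are invented for the sketch and are not RADAR's features or data.

```python
def best_stump(X, y):
    """Pick the single binary feature whose value (or its negation) best
    predicts the label; return (accuracy, feature index)."""
    best = (0.0, -1)
    for f in range(len(X[0])):
        agree = sum(1 for xi, yi in zip(X, y) if xi[f] == yi)
        acc = max(agree, len(y) - agree) / len(y)
        if acc > best[0]:
            best = (acc, f)
    return best

# Toy samples: columns are hypothetical TTP indicators, e.g.
# [T1071 app-layer C2, T1105 ingress tool transfer, T1046 service discovery]
X = [(1, 1, 0), (1, 0, 1), (0, 0, 1), (0, 1, 0), (1, 1, 1), (0, 0, 0)]
y = (1, 1, 0, 0, 1, 0)   # 1 = malicious, 0 = benign
acc, feat = best_stump(X, y)
```

A real decision tree recurses on such splits; the point here is only that a TTP match alone is not a verdict, and a learned threshold over TTP combinations is what separates malicious from benign usage.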

A frequency $n$-cube $F^n(q;l_0,...,l_{m-1})$ is an $n$-dimensional $q$-by-...-by-$q$ array, where $q = l_0+...+l_{m-1}$, filled with the symbols $0,...,m-1$ so that each line contains exactly $l_i$ cells with symbol $i$, $i = 0,...,m-1$ (a line consists of the $q$ cells of the array that differ in one coordinate). The trivial upper bound on the number of frequency $n$-cubes is $m^{(q-1)^{n}}$. We improve this upper bound for $n>2$, replacing $q-1$ by a smaller value, by constructing a testing set of size $s^{n}$, $s<q-1$, for frequency $n$-cubes (a testing set is a collection of cells of an array whose values uniquely determine the array with the given parameters). We also construct new testing sets for generalized frequency $n$-cubes, which are essentially correlation-immune functions in $n$ $q$-valued arguments; the cardinalities of the new testing sets are smaller than those of previously known testing sets.
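The defining property is easy to check directly in the two-dimensional case, where a line is a row or a column. A minimal sketch (the example square is invented) verifying an $F^2(4;2,2)$ frequency square:

```python
def is_frequency_square(A, freqs):
    """Check that every row and every column of the q x q array A contains
    symbol i exactly freqs[i] times."""
    lines = [list(row) for row in A] + [list(col) for col in zip(*A)]
    return all(line.count(s) == f
               for line in lines
               for s, f in enumerate(freqs))

# F^2(4; 2, 2): a 4x4 array over {0, 1} with each symbol twice per line
A = [[0, 0, 1, 1],
     [1, 1, 0, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
```

A testing set would be a subset of cells from which the rest of `A` can be reconstructed uniquely; the paper's bound comes from counting the possible fillings of such a set.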

Relational verification encompasses information flow security, regression verification, translation validation for compilers, and more. Effective alignment of the programs and computations to be related facilitates the use of simpler relational invariants and relational procedure specifications, which in turn enables automation and modular reasoning. Alignment has been explored in terms of trace pairs, deductive rules of relational Hoare logics (RHL), and several forms of product automata. This article shows how a simple extension of Kleene Algebra with Tests (KAT), called BiKAT, subsumes prior formulations, including alignment witnesses for forall-exists properties, which brings to light new RHL-style rules for such properties. Alignments can be discovered algorithmically or devised manually but, in either case, their adequacy with respect to the original programs must be proved; an explicit algebra enables constructive proof by equational reasoning. Furthermore, our approach inherits algorithmic benefits from existing KAT-based techniques and tools, which are applicable to a range of semantic models.

This paper presents three classes of metalinear structures that abstract some of the properties of Hilbert spaces. These structures include a binary relation that expresses orthogonality between elements and enables the definition of an operation that generalizes the projection operation in Hilbert spaces. The logic defined by the most general class has a unary connective and two dual binary connectives that are neither commutative nor associative. It is a substructural logic of sequents in which the Exchange rule is extremely limited and Weakening is also restricted. This provides a logic for quantum measurements whose proof theory is attractive. A completeness result is proved. An additional property of the binary relation ensures that the structure satisfies the MacLane-Steinitz exchange property and forms a kind of matroid. Preliminary results on richer structures based on a sort of real inner product that generalizes the Born factor of Quantum Physics are also presented.

Let $q$ be a prime power and let $\mathcal{R}=\mathbb{F}_{q}[u_1,u_2, \cdots, u_k]/\langle f_i(u_i),u_iu_j-u_ju_i\rangle$ be a finite non-chain ring, where the $f_i(u_i)$, $1\leq i \leq k$, are polynomials, not all linear, that split into distinct linear factors over $\mathbb{F}_{q}$. We characterize constacyclic codes over the ring $\mathcal{R}$ and construct quantum codes from them. As an application, we obtain some new quantum codes that improve on the best known codes. We also prove that the choice of the polynomials $f_i(u_i)$, $1 \leq i \leq k$, is irrelevant when constructing quantum codes from constacyclic codes over $\mathcal{R}$; only their degrees matter. It is shown that there always exists a quantum MDS code $[[n,n-2,2]]_q$ for any $n$ with $\gcd (n,q)\neq 1.$

A locally testable code is an error-correcting code that admits very efficient probabilistic tests of membership. Tensor codes provide a simple family of combinatorial constructions of locally testable codes that generalize the family of Reed-Muller codes. The natural test for tensor codes, the axis-parallel line vs. point test, plays an essential role in constructions of probabilistically checkable proofs. We analyze the axis-parallel line vs. point test as a two-prover game and show that the test is sound against quantum provers sharing entanglement. Our result implies the quantum-soundness of the low individual degree test, which is an essential component of the MIP* = RE theorem. Our proof also generalizes to the infinite-dimensional commuting-operator model of quantum provers.
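The axis-parallel line vs. point test can be sketched concretely for the smallest interesting tensor code. Assuming the $[3,2]$ single-parity-check code as both components (an illustrative choice, not the paper's setting), a word of the tensor code is a $3\times 3$ matrix whose rows and columns all have even parity, and the tester samples one random line and checks membership:

```python
import random

def in_parity_code(word):
    """[3, 2] binary single-parity-check code: coordinates sum to 0 mod 2."""
    return sum(word) % 2 == 0

def line_test(M, trials=100, seed=0):
    """Axis-parallel line vs. point test: repeatedly sample a random row or
    column of the 3x3 word M and check it lies in the component code."""
    rng = random.Random(seed)
    for _ in range(trials):
        if rng.random() < 0.5:
            line = M[rng.randrange(3)]                 # a row
        else:
            j = rng.randrange(3)
            line = [M[i][j] for i in range(3)]         # a column
        if not in_parity_code(line):
            return False
    return True

# A word of the tensor code: every row and every column has even parity.
M = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]
```

Flipping a single entry corrupts one row and one column, so repeated sampling detects the corruption with overwhelming probability; the paper's contribution is that this soundness survives even against entangled quantum provers answering the row/column and point queries.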

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
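The exploding and vanishing gradient problem that criticality addresses can be seen in a minimal numpy sketch. The deep *linear* network below, with weights drawn as $W_{ij} \sim \mathcal{N}(0, \sigma_w^2/\text{width})$, is an illustrative simplification (the widths, depths, and values of $\sigma_w$ are assumptions, not the book's setup): signals shrink for $\sigma_w < 1$, blow up for $\sigma_w > 1$, and are roughly preserved at the critical value $\sigma_w = 1$.

```python
import numpy as np

def propagate(depth, width, sigma_w, seed=0):
    """Push a unit-norm input through `depth` random linear layers with
    weights W ~ N(0, sigma_w^2 / width) and return the final norm."""
    rng = np.random.default_rng(seed)
    x = np.ones(width) / np.sqrt(width)   # unit-norm input
    for _ in range(depth):
        W = rng.normal(0.0, sigma_w / np.sqrt(width), (width, width))
        x = W @ x
    return np.linalg.norm(x)

norms = {s: propagate(depth=50, width=500, sigma_w=s) for s in (0.5, 1.0, 1.5)}
```

Tuning the initialization so that signal norms neither grow nor decay with depth is the "criticality" condition; for nonlinear activations the critical initialization depends on the activation function, which is what organizes networks into universality classes.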

The Q-learning algorithm is known to be affected by the maximization bias, i.e. the systematic overestimation of action values, an important issue that has recently received renewed attention. Double Q-learning has been proposed as an efficient algorithm to mitigate this bias. However, this comes at the price of an underestimation of action values, in addition to increased memory requirements and a slower convergence. In this paper, we introduce a new way to address the maximization bias in the form of a "self-correcting algorithm" for approximating the maximum of an expected value. Our method balances the overestimation of the single estimator used in conventional Q-learning and the underestimation of the double estimator used in Double Q-learning. Applying this strategy to Q-learning results in Self-correcting Q-learning. We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate. Empirically, it performs better than Double Q-learning in domains with rewards of high variance, and it even attains faster convergence than Q-learning in domains with rewards of zero or low variance. These advantages transfer to a Deep Q Network implementation that we call Self-correcting DQN and which outperforms regular DQN and Double DQN on several tasks in the Atari 2600 domain.
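The maximization bias and the double estimator's underestimation are easy to reproduce in a Monte-Carlo sketch (the parameters are illustrative, not from the paper): all true action values are zero, yet the single estimator's max of sample means is positively biased, while the double estimator, which selects the argmax on one half of the data and evaluates it on the other, is unbiased here and underestimates a true maximum in general.

```python
import numpy as np

def estimate_max(n_actions=10, n_samples=100, n_runs=2000, seed=0):
    """Monte-Carlo comparison of the single and double estimators of
    max_a E[r(a)] when every true action value is 0 and rewards are N(0, 1)."""
    rng = np.random.default_rng(seed)
    single, double = [], []
    for _ in range(n_runs):
        rewards = rng.normal(0.0, 1.0, (n_actions, n_samples))
        # single estimator (Q-learning): max of the per-action sample means
        single.append(rewards.mean(axis=1).max())
        # double estimator (Double Q-learning): argmax on one half,
        # evaluate that action on the other half
        a = rewards[:, : n_samples // 2].mean(axis=1).argmax()
        double.append(rewards[a, n_samples // 2 :].mean())
    return float(np.mean(single)), float(np.mean(double))

single_est, double_est = estimate_max()
```

The self-correcting estimator the paper proposes interpolates between these two regimes; its exact form is given in the paper rather than sketched here.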

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
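The re-weighting scheme follows directly from the formula in the abstract: each class's weight is the inverse of its effective number of samples $(1-\beta^{n})/(1-\beta)$, normalized so the weights sum to the number of classes. A minimal sketch (the class counts and normalization choice are illustrative):

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Per-class weights inversely proportional to the effective number of
    samples E_n = (1 - beta^n) / (1 - beta), normalized to sum to the
    number of classes."""
    counts = np.asarray(counts, dtype=float)
    eff_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    weights = 1.0 / eff_num
    return weights * len(counts) / weights.sum()

# Long-tailed class counts: head class has 10,000 samples, tail class has 10.
w = class_balanced_weights([10_000, 1_000, 100, 10], beta=0.999)
```

As $\beta \to 0$ the scheme reduces to no re-weighting (every effective number tends to 1), and as $\beta \to 1$ it approaches inverse-frequency weighting, so $\beta$ smoothly interpolates between the two.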
