
Insertion-deletion (insdel) codes were introduced to correct synchronization errors. In this paper we prove several Singleton-type upper bounds on the insdel distances of linear insertion-deletion codes, based on the generalized Hamming weights and the structure of minimum Hamming weight codewords. Our bounds are stronger than some previously known bounds. Some of our upper bounds are valid for any fixed ordering of the coordinate positions. We apply these upper bounds to some binary cyclic codes with arbitrary rearrangements of coordinate positions, to binary Reed-Muller codes, and to an algebraic-geometric code from an elliptic curve.
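As a concrete illustration of the quantities involved, the brute-force sketch below computes the minimum insdel distance of a small binary linear code (the insdel distance of two equal-length words is twice the length minus twice their longest common subsequence) and compares it with the half-Singleton bound $d_I \le 2(n-2k+2)$ for linear $[n,k]$ codes; this is an illustrative baseline, not the sharper bounds of the paper.

```python
from itertools import product

def lcs_len(a, b):
    """Length of the longest common subsequence of sequences a and b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def insdel_distance(a, b):
    """Insdel distance: insertions plus deletions needed to turn a into b."""
    return len(a) + len(b) - 2 * lcs_len(a, b)

def min_insdel_distance(generator_rows, n):
    """Brute-force minimum insdel distance of the binary linear code
    spanned by generator_rows (lists of 0/1 of length n)."""
    k = len(generator_rows)
    words = set()
    for coeffs in product([0, 1], repeat=k):
        words.add(tuple(sum(c * g[i] for c, g in zip(coeffs, generator_rows)) % 2
                        for i in range(n)))
    words = list(words)
    return min(insdel_distance(u, v)
               for i, u in enumerate(words) for v in words[i + 1:])

# Example: a [4,2] binary code; the half-Singleton bound gives 2(n-2k+2) = 4.
G = [[1, 0, 1, 1], [0, 1, 0, 1]]
print(min_insdel_distance(G, 4))  # prints 2, within the bound
```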

Related content

Algebraic methods for the design of series of maximum distance separable (MDS) linear block and convolutional codes to required specifications and types are presented. Algorithms are given to design codes of required rate, required error-correcting capability and required type. Infinite series of block codes are designed with rate approaching a given rational $R$ with $0<R<1$ and relative distance over length approaching $(1-R)$. These can be designed over fields of a given characteristic $p$ or over fields of prime order, and can be specified to be of a particular type, such as (i) dual-containing under the Euclidean inner product, (ii) dual-containing under the Hermitian inner product, (iii) quantum error-correcting, (iv) linear complementary dual (LCD). Convolutional codes of required rate and distance are designed, as are infinite series of convolutional codes with rate approaching a given rational $R$ and distance over length approaching $2(1-R)$. Properties, including distances, are established algebraically, and explicit efficient algebraic decoding methods are known.
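As a small illustration of the MDS property underlying such constructions, the sketch below builds a Reed-Solomon code over a prime field (parameters chosen arbitrarily) and verifies by brute force that it attains the Singleton bound $d = n-k+1$; it is a sanity check, not the paper's design method.

```python
# Verify, over a small prime field, that a Reed-Solomon code is MDS.
from itertools import product

p, n, k = 7, 6, 3                 # field size, length, dimension (n <= p)
alphas = list(range(n))           # distinct evaluation points in GF(7)

def encode(msg):
    """Evaluate the message polynomial at the points alphas (mod p)."""
    return [sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p
            for a in alphas]

min_weight = min(
    sum(x != 0 for x in encode(msg))
    for msg in product(range(p), repeat=k) if any(msg)
)
print(min_weight, n - k + 1)      # both 4: the code is MDS, rate R = 1/2
```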

List-decodable codes have been an active topic in theoretical computer science since the seminal papers of M. Sudan and V. Guruswami in 1997-1998. List-decodable codes have also been considered in the rank-metric, subspace-metric, cover-metric, pair-metric and insdel-metric settings. In this paper we show that rates, list-decodable radii and list sizes are closely related to the classical topic of covering codes. We prove new, simple but strong general upper bounds for list-decodable codes in general finite metric spaces, based on various covering codes of finite metric spaces. These covering-code upper bounds apply even when the volumes of the balls depend on the centers, not only on the radius. Any good upper bound on the covering radius or on the size of a covering code then implies a good upper bound on the size of list-decodable codes. Our results give exponential improvements on the recent generalized Singleton upper bound of STOC 2020 for Hamming-metric list-decodable codes when the code lengths are large. Even in the list size $L=1$ case, our covering-code upper bounds give highly non-trivial upper bounds on the sizes of codes with a given minimum distance. A generalized Singleton upper bound for average-radius list-decodable codes is also given. The asymptotic forms of our covering-code bounds partially recover the Blinovsky bound and the combinatorial bound of Guruswami-H{\aa}stad-Sudan-Zuckerman in the Hamming-metric setting. We also propose studying combinatorial covering list-decodable codes as a natural generalization of combinatorial list-decodable codes. We apply our general covering-code upper bounds to list-decodable rank-metric codes, list-decodable subspace codes, list-decodable insertion codes and list-decodable deletion codes. Some new and better results on the non-list-decodability of rank-metric codes and subspace codes are obtained.
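The basic counting argument behind covering-code upper bounds can be seen on a toy case: if a radius-$e$ covering code of size $K$ exists, the radius-$e$ balls around its centers exhaust the space, and each such ball contains at most $L$ codewords of any $(e,L)$ list-decodable code, so the code has size at most $LK$. The sketch below checks the covering step with the perfect $[7,4]$ Hamming code; an illustration only, not the paper's general bounds.

```python
from itertools import product

n = 7
# Parity-check matrix columns = binary representations of 1..7.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]

def syndrome(v):
    return tuple(sum(h[i] * v[i] for i in range(n)) % 2 for h in H)

hamming = [v for v in product([0, 1], repeat=n) if syndrome(v) == (0, 0, 0)]
print(len(hamming))  # 16 codewords

def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

# Covering radius: every length-7 vector is within distance 1 of a codeword,
# so any (1, L) list-decodable binary code of length 7 has size <= 16 * L.
radius = max(min(dist(v, c) for c in hamming) for v in product([0, 1], repeat=n))
print(radius)  # 1
```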

We consider a variant of the clustering problem for a complete weighted graph. The aim is to partition the nodes into clusters maximizing the sum of the edge weights within the clusters. This problem is known as the clique partitioning problem; it is NP-hard in the general case where edge weights have different signs. We propose a new method of estimating an upper bound on the objective function, which we combine with the classical branch-and-bound technique to find the exact solution. We evaluate our approach on a broad range of random graphs and real-world networks. The proposed approach provided tighter upper bounds and achieved significant convergence speed improvements compared to known alternative methods.
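For concreteness, a minimal brute-force reference solver for the clique partitioning objective is sketched below (the weight matrix is a made-up example); the paper's contribution, a tighter upper bound used to prune branch-and-bound search, is not reproduced here.

```python
# Brute-force clique partitioning on a small complete weighted graph.
def partitions(items):
    """Generate all set partitions of a list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def intra_weight(part, w):
    """Sum of edge weights inside the clusters of a partition."""
    return sum(w[u][v] for block in part
               for i, u in enumerate(block) for v in block[i + 1:])

# Symmetric weight matrix with mixed signs (the NP-hard case).
w = [[ 0,  3, -2,  1],
     [ 3,  0,  4, -5],
     [-2,  4,  0,  2],
     [ 1, -5,  2,  0]]
best = max(partitions(list(range(4))), key=lambda p: intra_weight(p, w))
print(best, intra_weight(best, w))  # an optimal partition, value 5
# The sum of positive edge weights (3+4+1+2 = 10) is a trivial upper bound;
# a tighter bound lets branch-and-bound prune far more aggressively.
```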

We study codes with the parameters of $q$-ary shortened Hamming codes, i.e., $(n=(q^m-q)/(q-1), q^{n-m}, 3)_q$ codes. First, we prove the fact, mentioned in [A. E. Brouwer et al., Bounds on mixed binary/ternary codes, IEEE Trans. Inf. Theory 44 (1998) 140-161], that such codes are optimal, generalizing it to a bound for multifold packings of radius-$1$ balls, with a corollary for multiple coverings. In particular, we show that the punctured Hamming code is an optimal $q$-fold packing with minimum distance $2$. Second, we show the existence of $4$-ary codes with the parameters of shortened $1$-perfect codes that cannot be obtained by shortening a $1$-perfect code. Keywords: Hamming graph; multifold packings; multiple coverings; perfect codes.
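A quick sanity check of the stated parameters in the binary case: shortening the $[7,4,3]$ Hamming code at one coordinate yields a $(6, 2^3, 3)_2$ code, matching $(n=(q^m-q)/(q-1), q^{n-m}, 3)_q$ with $q=2$, $m=3$. The sketch below verifies this by exhaustive enumeration; it is illustrative, not the paper's construction.

```python
from itertools import product

# The [7,4,3] Hamming code from its parity-check matrix.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]
code = [v for v in product([0, 1], repeat=7)
        if all(sum(h[i] * v[i] for i in range(7)) % 2 == 0 for h in H)]

# Shorten at the last coordinate: keep codewords ending in 0, drop that bit.
shortened = [v[:-1] for v in code if v[-1] == 0]
q, m = 2, 3
n = (q**m - q) // (q - 1)
d = min(sum(x != 0 for x in v) for v in shortened if any(v))
print(n, len(shortened), d)  # 6 8 3, matching (n, q^{n-m}, 3)_q
```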

This paper presents a new and unified approach to the derivation and analysis of many existing, as well as new, discontinuous Galerkin methods for linear elasticity problems. The analysis is based on a unified discrete formulation of the linear elasticity problem with four discretization variables: a strongly symmetric stress tensor $\sigma_h$ and a displacement $u_h$ inside each element, and modifications $\hat{\sigma}_h$ and $\hat{u}_h$ of these two variables on the boundaries of elements. Motivated by many relevant methods in the literature, this formulation can be used to derive most existing discontinuous, nonconforming and conforming Galerkin methods for linear elasticity problems, and especially to develop a number of new discontinuous Galerkin methods. Many special cases of this four-field formulation are proved to be hybridizable and can be reduced to known hybridizable discontinuous Galerkin, weak Galerkin and local discontinuous Galerkin methods by eliminating one or two of the four fields. As a certain stabilization parameter tends to zero, this four-field formulation is proved to converge to conforming and nonconforming mixed methods for linear elasticity problems. Two families of inf-sup conditions, one known as $H^1$-based and the other as $H({\rm div})$-based, are proved to be uniformly valid with respect to different choices of discrete spaces and parameters. These inf-sup conditions guarantee the well-posedness of the newly proposed methods and also offer a new and unified analysis of many existing methods in the literature as a by-product. Numerical examples are provided to verify the theoretical analysis, including the optimal convergence of the newly proposed methods.
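As a rough schematic of what a four-field element-wise formulation of this type looks like (our notation and a generic HDG-style stabilization, not necessarily the paper's exact system):

```latex
% Schematic four-field formulation (illustrative; notation is ours).
% Find (sigma_h, u_h, hat-sigma_h, hat-u_h) such that, for every element K
% and all test functions (tau, v):
\[
\begin{aligned}
(\mathcal{A}\sigma_h, \tau)_K + (u_h, \nabla\!\cdot\tau)_K
  - \langle \hat{u}_h, \tau n \rangle_{\partial K} &= 0,\\
-(\sigma_h, \nabla v)_K + \langle \hat{\sigma}_h n, v \rangle_{\partial K} &= (f, v)_K,\\
\hat{\sigma}_h n &= \sigma_h n - \eta\,(u_h - \hat{u}_h) \quad \text{on } \partial K,
\end{aligned}
\]
% with \hat{u}_h and \hat{\sigma}_h n single-valued across interior faces;
% formally letting the stabilization parameter \eta tend to zero recovers
% mixed-type methods, consistent with the limit discussed above.
```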

We deal with the shape reconstruction of inclusions in elastic bodies. For solving this inverse problem in practice, data-fitting functionals are used. These work better than the rigorous monotonicity methods from [5], but have no rigorously proven convergence theory. We therefore show how the monotonicity methods can be converted into a regularization method for a data-fitting functional without losing the convergence properties of the monotonicity methods. This is a great advantage and a significant improvement over standard regularization techniques. In more detail, we introduce constraints, based on the monotonicity methods, on the minimization problem for the residual, and prove the existence and uniqueness of a minimizer as well as the convergence of the method for noisy data. In addition, we compare numerical reconstructions of inclusions based on the monotonicity-based regularization with a standard approach (one-step linearization with Tikhonov-like regularization), which also shows the robustness of our method with respect to noise in practice.
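To make the idea concrete, the sketch below minimizes a data-fitting residual under componentwise bounds of the kind monotonicity tests provide; the forward map `F`, the parameters and the bounds are hypothetical placeholders, not the paper's operators.

```python
import numpy as np
from scipy.optimize import minimize

def F(kappa):
    """Hypothetical linearized forward map: parameters -> measurements."""
    A = np.array([[1.0, 0.3], [0.2, 0.8], [0.5, 0.5]])
    return A @ kappa

# Noisy synthetic measurements of a two-pixel "inclusion" contrast.
y_meas = F(np.array([1.5, 0.7])) + 0.01 * np.random.default_rng(0).standard_normal(3)

# Monotonicity tests mark which pixels may belong to the inclusion, giving
# componentwise lower/upper bounds on the contrast instead of a smoothness prior.
bounds = [(1.0, 3.0), (0.0, 1.0)]   # e.g. pixel 1 flagged "inside", pixel 2 not

res = minimize(lambda kappa: np.sum((F(kappa) - y_meas) ** 2),
               x0=np.array([1.0, 0.5]), bounds=bounds, method="L-BFGS-B")
print(res.x)  # constrained minimizer of the residual
```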

We argue that proven exponential upper bounds on runtimes, an established area in classic algorithmics, are also interesting in heuristic search, and we prove several such results. We show that any of the algorithms randomized local search, the Metropolis algorithm, simulated annealing, and the (1+1) evolutionary algorithm can optimize any pseudo-Boolean weakly monotonic function under a large set of noise assumptions in a runtime that is at most exponential in the problem dimension~$n$. This drastically extends a previous such result, which was limited to the (1+1) EA, the LeadingOnes function, and one-bit or bit-wise prior noise with noise probability at most $1/2$, and at the same time simplifies its proof. With the same general argument, we also derive, among other results, a sub-exponential upper bound for the runtime of the $(1,\lambda)$ evolutionary algorithm on the OneMax problem when the offspring population size $\lambda$ is logarithmic but below the efficiency threshold. To show that our approach can also deal with non-trivial parent population sizes, we prove an exponential upper bound for the runtime of the mutation-based version of the simple genetic algorithm on the OneMax benchmark, matching a known exponential lower bound.
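For reference, a minimal implementation of the (1+1) evolutionary algorithm on OneMax, the kind of algorithm/benchmark pair these runtime bounds concern, is sketched below (noiseless, for simplicity).

```python
import random

def one_max(x):
    """OneMax fitness: the number of one-bits."""
    return sum(x)

def one_plus_one_ea(n, seed=0):
    """(1+1) EA with standard bit-wise mutation rate 1/n; returns #evaluations."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while one_max(x) < n:
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # flip each bit w.p. 1/n
        if one_max(y) >= one_max(x):                    # accept ties
            x = y
        evals += 1
    return evals

print(one_plus_one_ea(50))  # typically Theta(n log n) evaluations without noise
```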

An interesting observation about artificial neural networks is their favorable generalization error despite their typically being extremely overparameterized. It is well known that classical statistical learning methods often yield vacuous generalization bounds for overparameterized neural networks. Adopting the recently developed Neural Tangent (NT) kernel theory, we prove uniform generalization bounds for overparameterized neural networks in kernel regimes, when the true data-generating model belongs to the reproducing kernel Hilbert space (RKHS) corresponding to the NT kernel. Importantly, our bounds capture the exact error rates depending on the differentiability of the activation functions. To establish these bounds, we propose the information gain of the NT kernel as a measure of the complexity of the learning problem. Our analysis uses a Mercer decomposition of the NT kernel in the basis of spherical harmonics and the decay rate of the corresponding eigenvalues. As a byproduct of our results, we show the equivalence between the RKHS corresponding to the NT kernel and its counterpart corresponding to the Mat\'ern family of kernels, showing that NT kernels induce a very general class of models. We further discuss the implications of our analysis for some recent results on regret bounds for reinforcement learning and bandit algorithms that use overparameterized neural networks.
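For readers unfamiliar with the NT kernel, the sketch below evaluates the standard closed-form NTK of an infinitely wide two-layer ReLU network on unit-norm inputs (stated up to the normalization convention; the paper treats general activations and their differentiability, which is not reproduced here).

```python
import numpy as np

def ntk_relu_2layer(x, xp):
    """NTK(x, x') for unit vectors x, x' of a two-layer ReLU network."""
    cos_t = float(np.clip(np.dot(x, xp), -1.0, 1.0))
    theta = np.arccos(cos_t)
    k0 = (np.pi - theta) / (2 * np.pi)          # E[1{w.x>0} 1{w.x'>0}]
    k1 = (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
    return k1 + cos_t * k0                       # feature term + gradient term

x = np.array([1.0, 0.0])
xp = np.array([np.cos(0.5), np.sin(0.5)])       # angle 0.5 between inputs
print(ntk_relu_2layer(x, xp))
```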

This article discusses the security of McEliece-like encryption schemes using subspace subcodes of Reed-Solomon codes, i.e. subcodes of Reed-Solomon codes over $\mathbb{F}_{q^m}$ whose entries lie in a fixed collection of $\mathbb{F}_q$-subspaces of $\mathbb{F}_{q^m}$. These codes appear to be a natural generalisation of Goppa and alternant codes and provide broader flexibility in designing code-based encryption schemes. For the security analysis, we introduce a new operation on codes called the twisted product, which yields a polynomial-time distinguisher on such subspace subcodes as soon as the chosen $\mathbb{F}_q$-subspaces have dimension larger than $m/2$. From this distinguisher, we build an efficient attack which, in particular, breaks some parameters of a recent proposal due to Khathuria, Rosenthal and Weger.
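The twisted product introduced in the paper refines the classical component-wise (Schur) product distinguisher; the sketch below shows the classical version, in which the Schur square of a Reed-Solomon code has abnormally small rank compared to that of a random code with the same parameters (small illustrative parameters, not the attack itself).

```python
import itertools
import numpy as np

p, n, k = 13, 12, 4
alphas = np.arange(n)

def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p) by Gauss-Jordan elimination."""
    M = M.copy() % p
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[r] = (M[r] * pow(int(M[r, c]), p - 2, p)) % p   # normalize pivot
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % p
        r += 1
    return r

G_rs = np.array([[pow(int(a), i, p) for a in alphas] for i in range(k)])
G_rand = np.random.default_rng(1).integers(0, p, size=(k, n))

def schur_square_rank(G):
    """Rank of the span of all pairwise component-wise products of rows."""
    rows = [(G[i] * G[j]) % p for i, j in
            itertools.combinations_with_replacement(range(G.shape[0]), 2)]
    return rank_mod_p(np.array(rows), p)

print(schur_square_rank(G_rs))    # 2k-1 = 7 for Reed-Solomon
print(schur_square_rank(G_rand))  # typically min(n, k(k+1)/2) = 10 for random
```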

Since their introduction in Abadie and Gardeazabal (2003), Synthetic Control (SC) methods have quickly become one of the leading methods for estimating causal effects in observational studies with panel data. Formal discussions often motivate SC methods by the assumption that the potential outcomes were generated by a factor model. Here we study SC methods from a design-based perspective, assuming a model for the selection of the treated unit(s) and period(s). We show that the standard SC estimator is generally biased under random assignment. We propose a Modified Unbiased Synthetic Control (MUSC) estimator that guarantees unbiasedness under random assignment and derive its exact, randomization-based, finite-sample variance. We also propose an unbiased estimator of this variance. We document, in settings with real data, that under random assignment, SC-type estimators can have root mean-squared errors substantially lower than those of other common estimators. We show that such an improvement is weakly guaranteed if the treated period is similar to the other periods, for example, if the treated period was randomly selected.
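For context, the standard SC estimator fits non-negative weights summing to one so that a weighted average of donor units tracks the treated unit before treatment; a minimal sketch with synthetic placeholder data follows (the MUSC modification is not reproduced).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Y_donors_pre = rng.normal(size=(10, 5))        # 10 pre-periods, 5 donor units
Y_treated_pre = Y_donors_pre @ np.array([0.5, 0.3, 0.2, 0.0, 0.0]) \
                + 0.05 * rng.normal(size=10)   # treated unit's pre-period path

def loss(w):
    """Pre-treatment fit of the weighted donor average to the treated unit."""
    return np.sum((Y_treated_pre - Y_donors_pre @ w) ** 2)

n_donors = Y_donors_pre.shape[1]
res = minimize(loss, np.full(n_donors, 1.0 / n_donors),
               bounds=[(0.0, 1.0)] * n_donors,                 # w >= 0
               constraints=[{"type": "eq",
                             "fun": lambda w: w.sum() - 1.0}],  # sum to 1
               method="SLSQP")
print(np.round(res.x, 3))  # donor weights; the weighted donor average serves
                           # as the treated unit's counterfactual
```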
