
The hull of a linear code over a finite field is the intersection of the code and its dual, and linear codes with small hulls have applications in computational complexity and information protection. Linear codes with the smallest possible hull are LCD codes, which have been widely studied. Recently, several papers have been devoted to relating LCD codes over finite fields of size greater than 3 to linear codes with one-dimensional or higher-dimensional hulls. An interesting and non-trivial problem is therefore to study binary linear codes with one-dimensional hull in connection with binary LCD codes. The objective of this paper is to study some properties of binary linear codes with one-dimensional hull and to establish their relation with binary LCD codes. Some interesting inequalities are obtained along the way. Using this characterization, we study the largest minimum distance $d_{one}(n,k)$ among all binary linear $[n,k]$ codes with one-dimensional hull. We determine the largest minimum distances $d_{one}(n,n-k)$ for $k\leq 5$ and $d_{one}(n,k)$ for $k\leq 4$ or $14\leq n\leq 24$, and we partially determine the exact value of $d_{one}(n,k)$ for $k=5$ or $25\leq n\leq 30$.
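
To make the central object concrete, here is a short Python sketch (illustrative code, not from the paper) that computes the hull dimension of a binary linear code from a full-rank generator matrix $G$, using the standard fact that $\dim(\mathcal{C}\cap\mathcal{C}^\perp)=k-\mathrm{rank}_{\mathbb{F}_2}(GG^T)$.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                  # eliminate the column elsewhere
        rank += 1
    return rank

def hull_dimension(G):
    """dim(C ∩ C^⊥) = k - rank(G G^T) over GF(2), for a full-rank k x n generator matrix G."""
    k = G.shape[0]
    return k - gf2_rank((G @ G.T) % 2)

# The [7,4] Hamming code contains its dual (the [7,3] simplex code),
# so its hull is the whole dual code and has dimension 3.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.int64)
print(hull_dimension(G))   # prints 3; an LCD code would print 0
```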

Related content

Let $G=(V,E)$ be an $n$-vertex connected graph of maximum degree $\Delta$. Given access to $V$ and an oracle that, given two vertices $u,v\in V$, returns the shortest path distance between $u$ and $v$, how many queries are needed to reconstruct $E$? We give a simple deterministic algorithm to reconstruct trees using $\Delta n\log_\Delta n+(\Delta+2)n$ distance queries and show that even randomised algorithms need to use at least $\frac1{200} \Delta n\log_\Delta n$ queries in expectation. The best previous lower bound was an information-theoretic lower bound of $\Omega(n\log n/\log \log n)$. Our lower bound also extends to related query models including distance queries for phylogenetic trees, membership queries for learning partitions and path queries in directed trees. We extend our deterministic algorithm to reconstruct graphs without induced cycles of length at least $k$ using $O_{\Delta,k}(n\log n)$ queries, which includes various graph classes of interest such as chordal graphs, permutation graphs and AT-free graphs. Since the previously best known randomised algorithm for chordal graphs uses $O_{\Delta}(n\log^2 n)$ queries in expectation, we both remove the randomness and obtain the optimal dependence on $n$ for chordal graphs and various other graph classes. Finally, we build on an algorithm of Kannan, Mathieu, and Zhou [ICALP, 2015] to give a randomised algorithm for reconstructing graphs of treelength $k$ using $O_{\Delta,k}(n\log^2n)$ queries in expectation.
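
For intuition, here is a naive baseline in Python (my own sketch, not the query-efficient algorithm above): root the tree, layer the vertices by distance to the root, and find each vertex's parent by probing the previous layer. It uses $O(n^2)$ distance queries in the worst case, far more than the $\Delta n\log_\Delta n+(\Delta+2)n$ bound of the paper.

```python
from collections import defaultdict

def reconstruct_tree(vertices, dist):
    """Naive tree reconstruction from a distance oracle dist(u, v).
    Baseline only: O(n^2) queries in the worst case."""
    root = vertices[0]
    depth = {root: 0}
    for v in vertices[1:]:
        depth[v] = dist(root, v)              # n - 1 queries
    layers = defaultdict(list)
    for v, d in depth.items():
        layers[d].append(v)
    edges = []
    for d in sorted(layers):
        if d == 0:
            continue
        for v in layers[d]:
            # The parent of v is the unique vertex one layer up at distance 1 from v.
            parent = next(u for u in layers[d - 1] if dist(u, v) == 1)
            edges.append((parent, v))
    return edges

# Usage on the path 0 - 1 - 2 - 3, where the oracle is simply |u - v|.
print(reconstruct_tree([0, 1, 2, 3], lambda u, v: abs(u - v)))
# [(0, 1), (1, 2), (2, 3)]
```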

Modern datasets are trending towards ever higher dimension. In response, recent theoretical studies of covariance estimation often assume the proportional-growth asymptotic framework, where the sample size $n$ and dimension $p$ are comparable, with $n, p \rightarrow \infty $ and $\gamma_n = p/n \rightarrow \gamma > 0$. Yet, many datasets -- perhaps most -- have very different numbers of rows and columns. We consider instead the disproportional-growth asymptotic framework, where $n, p \rightarrow \infty$ and $\gamma_n \rightarrow 0$ or $\gamma_n \rightarrow \infty$. Either disproportional limit induces novel behavior unseen in previous proportional and fixed-$p$ analyses. We study the spiked covariance model, with theoretical covariance a low-rank perturbation of the identity. For each of 15 different loss functions, we exhibit in closed form new optimal shrinkage and thresholding rules. Our optimal procedures demand extensive eigenvalue shrinkage and offer substantial performance benefits over the standard empirical covariance estimator. Practitioners may ask whether to view their data as arising within (and apply the procedures of) the proportional or the disproportional framework. Conveniently, it is possible to remain {\it framework agnostic}: one unified set of closed-form shrinkage rules (depending only on the aspect ratio $\gamma_n$ of the given data) offers full asymptotic optimality under either framework. At the heart of the phenomena we explore is the spiked Wigner model, in which a low-rank matrix is perturbed by symmetric noise. Exploiting a connection to the spiked covariance model as $\gamma_n \rightarrow 0$, we derive optimal eigenvalue shrinkage rules for estimation of the low-rank component, of independent and fundamental interest.
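
The paper's closed-form optimal rules are not reproduced here; as a purely illustrative Python simulation (all parameters are my own choices), the following snippet shows the upward bias of the leading sample eigenvalue in a rank-one spiked model, together with the classical proportional-regime limit $\lambda \approx \ell(1+\gamma/(\ell-1))$ that shrinkage rules of this kind invert.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, ell = 4000, 400, 5.0        # sample size, dimension, spike size (illustrative choices)
gamma = p / n

# Rank-one spiked model: population covariance diag(ell, 1, ..., 1).
Z = rng.standard_normal((n, p))
Z[:, 0] *= np.sqrt(ell)
S = Z.T @ Z / n                    # sample covariance
lam_top = np.linalg.eigvalsh(S)[-1]

# Classical limit of the top sample eigenvalue in the proportional regime
# (Baik--Silverstein): ell * (1 + gamma / (ell - 1)).  As gamma -> 0 the bias vanishes.
predicted = ell * (1.0 + gamma / (ell - 1.0))
print(f"true spike            : {ell:.3f}")
print(f"top sample eigenvalue : {lam_top:.3f}")
print(f"proportional-limit map: {predicted:.3f}")
```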

In this paper, we establish the central limit theorem (CLT) for linear spectral statistics (LSS) of large-dimensional sample covariance matrices when the population covariance matrices are not uniformly bounded. This constitutes a nontrivial extension of the Bai-Silverstein theorem (BST) (Ann Probab 32(1):553--605, 2004), a theorem that has strongly influenced the development of high-dimensional statistics, especially in the applications of random matrix theory to statistics. Recently there has been a growing realization that the assumption of uniform boundedness of the population covariance matrices in the BST is not satisfied in some fields, such as economics, where the variances of principal components can diverge as the dimension tends to infinity. Therefore, in this paper, we aim to remove this obstacle to the application of the BST. Our new CLT accommodates spiked eigenvalues, which may either be bounded or tend to infinity. A distinguishing feature of our result is that the variance in the new CLT depends on both the spiked eigenvalues and the bulk eigenvalues, with dominance determined by the divergence rate of the largest spiked eigenvalue. The new CLT for LSS is then applied to test the hypothesis that the population covariance matrix is the identity matrix or a generalized spiked model. The asymptotic distributions of the corrected likelihood ratio test statistic and the corrected Nagao's trace test statistic are derived under the alternative hypothesis. Moreover, we provide power comparisons between the two LSS-based tests and Roy's largest root test under certain hypotheses. In particular, we demonstrate that, except for the case where the number of spikes is equal to 1, the LSS-based tests may exhibit higher power than Roy's largest root test in certain scenarios.

In observational studies, unobserved confounding is a major barrier to isolating the average causal effect (ACE). In these scenarios, two main approaches are often used: confounder adjustment for causality (CAC) and instrumental variable analysis for causation (IVAC). Nevertheless, both rely on untestable assumptions, and it may therefore be unclear in which assumption-violation scenarios one method is superior to the other in terms of mitigating inconsistency for the ACE. Although general guidelines exist, direct theoretical comparisons of the trade-offs between the CAC and IVAC assumptions are limited. Using ordinary least squares (OLS) for CAC and two-stage least squares (2SLS) for IVAC, we analytically compare the relative inconsistency of each approach for the ACE under a variety of assumption-violation scenarios and discuss rules of thumb for practice. Additionally, a sensitivity framework is proposed to guide analysts in determining which approach may result in less inconsistency for estimating the ACE with a given dataset. We demonstrate our findings both through simulation and through an application examining whether maternal stress during pregnancy affects a neonate's birthweight. The implications of our findings for causal inference practice are discussed, providing guidance for analysts in judging whether CAC or IVAC may be more appropriate for a given situation.
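
As a toy illustration of the trade-off (my own data-generating process, not the paper's sensitivity framework), the following Python snippet simulates an unobserved confounder $U$ and a valid instrument $Z$, and contrasts a naive OLS fit with a hand-rolled 2SLS estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative DGP: U is an unobserved confounder, Z a valid instrument
# (it affects X only through the first stage), and the true ACE is 2.
U = rng.normal(size=n)
Z = rng.normal(size=n)
X = 0.8 * Z + 1.0 * U + rng.normal(size=n)
Y = 2.0 * X + 1.5 * U + rng.normal(size=n)

def ols_slope(x, y):
    """Slope from a simple linear regression of y on x (with intercept)."""
    design = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

# CAC via OLS without adjusting for U: inconsistent because U is unobserved.
beta_ols = ols_slope(X, Y)

# IVAC via two-stage least squares: regress X on Z, then Y on the fitted X.
# (Dropping the first-stage intercept only shifts X_hat by a constant,
# which does not change the second-stage slope.)
X_hat = ols_slope(Z, X) * Z
beta_2sls = ols_slope(X_hat, Y)

print("true effect  : 2.00")
print(f"OLS estimate : {beta_ols:.3f}")    # biased upward by the confounder
print(f"2SLS estimate: {beta_2sls:.3f}")   # approximately 2
```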

The stochastic partial differential equation (SPDE) approach is widely used for modeling large spatial datasets. It is based on representing a Gaussian random field $u$ on $\mathbb{R}^d$ as the solution of an elliptic SPDE $L^\beta u = \mathcal{W}$, where $L$ is a second-order differential operator, $2\beta$ is a positive integer that controls the smoothness of $u$, and $\mathcal{W}$ is Gaussian white noise. A few approaches have been suggested in the literature to extend the approach to allow for any smoothness parameter satisfying $\beta>d/4$. Even though those approaches work well for simulating SPDEs with general smoothness, they are less suitable for Bayesian inference since they do not provide approximations that are Gaussian Markov random fields (GMRFs), as in the original SPDE approach. We address this issue by proposing a new method based on approximating the covariance operator $L^{-2\beta}$ of the Gaussian field $u$ by a finite element method combined with a rational approximation of the fractional power. This results in a numerically stable GMRF approximation which can be combined with the integrated nested Laplace approximation (INLA) method for fast Bayesian inference. A rigorous convergence analysis of the method is performed and the accuracy of the method is investigated with simulated data. Finally, we illustrate the approach and corresponding implementation in the R package rSPDE via an application to precipitation data which is analyzed by combining the rSPDE package with the R-INLA software for full Bayesian inference.

Thanks to the rapid progress and growing complexity of quantum algorithms, correctness of quantum programs has become a major concern. Pioneering research over the past years has proposed various approaches to formally verify quantum programs using proof systems such as quantum Hoare logic. All these prior approaches are post-hoc: one first implements a complete program and only then verifies its correctness. In this work, we propose Quantum Correctness by Construction (QbC): an approach to constructing quantum programs from their specification in a way that ensures correctness. We use pre- and postconditions to specify program properties, and propose a set of refinement rules to construct correct programs in a quantum while language. We validate QbC by constructing quantum programs for two idiomatic problems, teleportation and search, from their specification. We find that the approach naturally suggests how to derive program details, highlighting key design choices along the way. As such, we believe that QbC can play an important role in supporting the design and taxonomization of quantum algorithms and software.

We are interested in creating statistical methods to provide informative summaries of random fields through the geometry of their excursion sets. To this end, we introduce an estimator for the length of the perimeter of excursion sets of random fields on $\mathbb{R}^2$ observed over regular square tilings. The proposed estimator acts on the empirically accessible binary digital images of the excursion regions and computes the length of a piecewise linear approximation of the excursion boundary. The estimator is shown to be consistent as the pixel size decreases, without the need for any normalization constant, and without imposing Gaussianity or isotropy on the underlying random field. In this general framework, even when the domain grows to cover $\mathbb{R}^2$, the estimation error is shown to be of smaller order than the side length of the domain. For affine, strongly mixing random fields, this translates to a multivariate Central Limit Theorem for our estimator when multiple levels are considered simultaneously. Finally, we conduct several numerical studies to investigate statistical properties of the proposed estimator in the finite-sample data setting.
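
For intuition, here is one concrete piecewise-linear perimeter estimator in Python, built on marching squares via scikit-image. It interpolates sub-pixel crossings of the real-valued field rather than acting on the binary digital image as the paper's estimator does, so it is an illustration in the same spirit rather than the proposed method.

```python
import numpy as np
from skimage import measure

def excursion_perimeter(field, level, pixel_size=1.0):
    """Length of a piecewise-linear approximation of the excursion boundary
    {x : field(x) = level}, computed with marching squares on the pixel grid."""
    total = 0.0
    for contour in measure.find_contours(field, level):
        segments = np.diff(contour, axis=0)                  # consecutive vertex differences
        total += np.sum(np.sqrt((segments ** 2).sum(axis=1)))
    return total * pixel_size                                # assumes equal spacing in x and y

# Sanity check on a deterministic "field": a cone whose level set at height r0
# is a circle of radius r0, so the estimate should be close to 2*pi*r0.
n, r0 = 512, 0.3
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs)
field = np.sqrt(X ** 2 + Y ** 2)
pixel = xs[1] - xs[0]
print(excursion_perimeter(field, r0, pixel), 2 * np.pi * r0)
```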

Linear inverse problems arise in diverse engineering fields, especially in signal and image reconstruction. The development of computational methods for linear inverse problems with sparsity is one of the recent trends in this field. The so-called optimal $k$-thresholding method is a newly introduced approach for sparse optimization and linear inverse problems. Compared with other sparsity-aware algorithms, the advantage of the optimal $k$-thresholding method is that it performs thresholding and error-metric reduction simultaneously, and it therefore works stably and robustly for medium-sized linear inverse problems. However, the runtime of this method is generally high when the size of the problem is large. The purpose of this paper is to propose an acceleration strategy for this method. Specifically, we propose a heavy-ball-based optimal $k$-thresholding (HBOT) algorithm and its relaxed variants for sparse linear inverse problems. The convergence of these algorithms is shown under the restricted isometry property. In addition, the numerical performance of the heavy-ball-based relaxed optimal $k$-thresholding pursuit (HBROTP) has been evaluated, and simulations indicate that HBROTP is robust for signal and image reconstruction even in noisy environments.
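
The optimal $k$-thresholding step solves a small optimization subproblem that is not reproduced here; as a minimal sketch of how a heavy-ball momentum term is grafted onto a thresholding iteration, the following Python code implements heavy-ball-accelerated iterative hard thresholding (an illustrative stand-in, not the HBOT/HBROTP algorithms of the paper).

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def heavy_ball_iht(A, y, k, step=None, momentum=0.5, iters=300):
    """Heavy-ball accelerated iterative hard thresholding (illustrative sketch):
    a gradient step on ||y - Ax||^2 plus a momentum term, followed by hard thresholding."""
    m, n = A.shape
    step = step if step is not None else 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x_new = hard_threshold(x - step * grad + momentum * (x - x_prev), k)
        x_prev, x = x, x_new
    return x

# Small noiseless compressed-sensing example with a random Gaussian matrix.
rng = np.random.default_rng(2)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = heavy_ball_iht(A, y, k)
print(np.linalg.norm(x_hat - x_true))   # small recovery error
```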

Splines over triangulations and splines over quadrangulations (tensor product splines) are two common ways to extend bivariate polynomials to splines. Combining the two approaches leads to splines defined over mixed triangle and quadrilateral meshes within the isogeometric framework. Mixed meshes are especially useful for representing complicated geometries obtained, e.g., from trimming. As (bi-)linearly parameterized mesh elements are not flexible enough to cover smooth domains, we focus in this work on the case of planar mixed meshes parameterized by (bi-)quadratic geometry mappings. In particular, we study in detail the space of $C^1$-smooth isogeometric spline functions of general polynomial degree over two such mixed mesh elements. We present the theoretical framework to analyze the smoothness conditions over the common interface for all possible configurations of mesh elements. This comprises the investigation of the dimension as well as the construction of a basis of the corresponding $C^1$-smooth isogeometric spline space over the domain described by two elements. Several examples of interest are presented in detail.

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
