
The intersection ${\bf C}\cap {\bf C}^{\perp}$ (resp. ${\bf C}\cap {\bf C}^{\perp_h}$) of a linear code ${\bf C}$ and its Euclidean dual ${\bf C}^{\perp}$ (resp. Hermitian dual ${\bf C}^{\perp_h}$) is called the Euclidean (resp. Hermitian) hull of the code. The construction of an entanglement-assisted quantum error-correcting (EAQEC) code from a linear code over ${\bf F}_q$ or ${\bf F}_{q^2}$ depends essentially on the Euclidean hull or the Hermitian hull of that code. It is therefore natural to consider the hull-variation problem when a linear code ${\bf C}$ is transformed to an equivalent code ${\bf v} \cdot {\bf C}$. In this paper we introduce the maximal hull dimension as an invariant of a linear code with respect to equivalence transformations, and study its basic properties. A general method to construct hull-decreasing or hull-increasing equivalent linear codes is proposed. We prove that for a nonnegative integer $h$ satisfying $0 \leq h \leq n-1$, a linear $[2n, n]_q$ self-dual code is equivalent to a linear code with $h$-dimensional hull. In the opposite direction, we prove that a linear LCD code over ${\bf F}_{2^s}$ satisfying $d\geq 2$ and $d^{\perp} \geq 2$ is equivalent to a linear code with one-dimensional hull under a weak condition. Several new families of negacyclic LCD codes and BCH LCD codes over ${\bf F}_3$ are also constructed. Our method can be applied to generalized Reed-Solomon codes and generalized twisted Reed-Solomon codes to construct MDS codes with hulls of arbitrary dimension. Some new EAQEC codes, including MDS and almost MDS entanglement-assisted quantum codes, are constructed. Many EAQEC codes over small fields are constructed from optimal Hermitian self-dual codes.
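To make the hull concrete: its dimension can be computed directly from generator matrices, since $\dim({\bf C}\cap {\bf C}^{\perp}) = \dim {\bf C} + \dim {\bf C}^{\perp} - \dim({\bf C}+{\bf C}^{\perp}) = n - \operatorname{rank}\binom{G}{H}$ when $G$ and $H$ are full-rank generator matrices of ${\bf C}$ and ${\bf C}^{\perp}$. A minimal sketch over ${\bf F}_2$ (the function names are illustrative, not from the paper):

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = np.array(M) % 2
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]     # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                 # XOR = addition over GF(2)
        rank += 1
    return rank

def hull_dimension(G, H):
    """dim(C ∩ C^⊥) = n − rank([G; H]), assuming G, H are full-rank
    generator matrices of C and its dual C^⊥."""
    n = G.shape[1]
    return n - rank_gf2(np.vstack([G, H]))
```

For example, the binary repetition code $[2,1]$ is self-dual ($G = H = (1\ 1)$), so its hull dimension is $1$; the code generated by $(1\ 0)$ has dual generated by $(0\ 1)$ and hull dimension $0$.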

Related content

In a Markov decision process (MDP), unobservable confounders may exist and affect the data-generating process, so that classic off-policy evaluation (OPE) estimators may fail to identify the true value function of the target policy. In this paper, we study the statistical properties of OPE in confounded MDPs with observable instrumental variables. Specifically, we propose a two-stage estimator based on the instrumental variables and establish its statistical properties in confounded MDPs with a linear structure. For the non-asymptotic analysis, we prove an $\mathcal{O}(n^{-1/2})$ error bound, where $n$ is the number of samples. For the asymptotic analysis, we prove that the two-stage estimator is asymptotically normal with the typical rate of $n^{1/2}$. To the best of our knowledge, we are the first to establish such statistical results for the two-stage estimator in confounded linear MDPs via instrumental variables.
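The two-stage idea descends from classical two-stage least squares: first project the confounded regressors onto the instruments, then regress the outcome on the projected part. A minimal numpy sketch of that generic idea (not the paper's MDP estimator, whose stages operate on value-function features):

```python
import numpy as np

def two_stage_estimator(Z, X, Y):
    """Stage 1: project confounded regressors X onto instruments Z.
    Stage 2: regress outcome Y on the projected (exogenous) part of X."""
    gamma, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ gamma
    beta, *_ = np.linalg.lstsq(X_hat, Y, rcond=None)
    return beta
```

On synthetic data where an unobserved confounder drives both the regressor and the outcome, ordinary least squares is biased while this two-stage estimate recovers the true coefficient, which is exactly the identification role the instruments play in the confounded-MDP setting.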

Subset sum is a very old and fundamental problem in theoretical computer science. In this problem, $n$ items with weights $w_1, w_2, w_3, \ldots, w_n$ are given as input and the goal is to determine whether there is a subset of them whose weights sum to a given value $t$. While the problem is NP-hard in general, when the weights are non-negative integers, subset sum can be solved in pseudo-polynomial time $\widetilde O(n+t)$. In this work, we consider the dynamic variant of subset sum. In this setting, an upper bound $t_{\max}$ is provided to the algorithm in advance, and in each operation either a new item is added to the problem or, for a given integer value $t \leq t_{\max}$, the algorithm is required to output whether there is a subset of items whose weights sum to $t$. Unfortunately, none of the existing subset sum algorithms is able to process these operations in truly sublinear time\footnote{Truly sublinear means $t_{\max}^{1-\Omega(1)}$.} in terms of $t_{\max}$. Our main contribution is an algorithm whose amortized processing time\footnote{Since the runtimes are amortized, we do not use the separate terms update time and query time for different operations, and use processing time for all types of operations.} for each operation is truly sublinear in $t_{\max}$ when the number of operations is at least $t_{\max}^{2/3+\Omega(1)}$. We also show that when both element addition and element removal are allowed, there is no algorithm that can process each operation in time $t_{\max}^{1-\Omega(1)}$ on average unless \textsf{SETH}\footnote{The \textit{strong exponential time hypothesis} states that no algorithm can solve the satisfiability problem in time $2^{n(1-\Omega(1))}$.} fails.
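The baseline that the dynamic algorithm must beat is the standard bitset DP: maintain an integer whose bit $s$ is set iff some subset sums to $s$, so each item insertion is one shift-or costing $\Theta(t_{\max})$ bit operations (up to word-size savings), which is not truly sublinear. A sketch of this naive dynamic baseline (the class name is illustrative):

```python
class DynamicSubsetSum:
    """Naive dynamic baseline: bit s of `reachable` is set iff some subset
    of the inserted items sums to s. Each insert is one shift-or over a
    bitset of length t_max + 1; queries are O(1)."""

    def __init__(self, t_max):
        self.mask = (1 << (t_max + 1)) - 1   # keep sums within [0, t_max]
        self.reachable = 1                   # only the empty sum 0 at first

    def add_item(self, w):
        self.reachable |= (self.reachable << w) & self.mask

    def query(self, t):
        return bool(self.reachable >> t & 1)
```

For instance, after inserting weights 3, 5, 7, a query for 12 succeeds ($5+7$) while a query for 4 fails. The paper's contribution is an algorithm whose amortized per-operation cost is truly sublinear in $t_{\max}$, improving on this baseline when the operation sequence is long enough.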

We propose a unified framework for likelihood-based regression modeling when the response variable has finite support. Our work is motivated by the fact that, in practice, observed data are discrete and bounded. The proposed methods assume a model that includes, as special cases, models previously considered for interval-censored variables with log-concave distributions. The resulting log-likelihood is concave, which we use to establish asymptotic normality of its maximizer as the number of observations $n$ tends to infinity with the number of parameters $d$ fixed, as well as rates of convergence of $L_1$-regularized estimators when the true parameter vector is sparse and both $d$ and $n$ tend to infinity with $\log(d) / n \to 0$. We consider an inexact proximal Newton algorithm for computing estimates and give theoretical guarantees for its convergence. The range of possible applications is wide, including but not limited to survival analysis in discrete time, the modeling of outcomes on scored surveys and questionnaires, and, more generally, interval-censored regression. The applicability and usefulness of the proposed methods are illustrated in simulations and data examples.

We study the asymptotic properties of the multivariate spike-and-slab LASSO (mSSL) proposed by Deshpande et al. (2019) for simultaneous variable and covariance selection. Specifically, we consider the sparse multivariate linear regression problem where $q$ correlated responses are regressed onto $p$ covariates. In this problem, the goal is to estimate a sparse matrix $B$ of marginal covariate effects and a sparse precision matrix $\Omega$, which captures the residual conditional dependence structure of the outcomes. The mSSL works by placing continuous spike-and-slab priors on all the entries of $B$ and on all the off-diagonal elements in the lower triangle of $\Omega$. Under mild assumptions, we establish the posterior contraction rate for the slightly modified mSSL posterior in the asymptotic regime where both $p$ and $q$ diverge with $n$. Our results imply that a slightly modified version of Deshpande et al.~(2019)'s mSSL procedure is asymptotically consistent.

Generalized linear mixed models (GLMM) are a popular tool to analyze clustered data, but when the number of clusters is small to moderate, standard statistical tests may produce elevated type I error rates. Small-sample corrections have been proposed to address this issue for continuous or binary outcomes without covariate adjustment. However, appropriate tests to use for count outcomes or under covariate-adjusted models remain unknown. An important setting in which this issue arises is in cluster-randomized trials (CRTs). Because many CRTs have just a few clusters (e.g., clinics or health systems), covariate adjustment is particularly critical to address potential chance imbalance and/or low power (e.g., adjustment following stratified randomization or for the baseline value of the outcome). We conducted simulations to evaluate GLMM-based tests of the treatment effect that account for the small (10) or moderate (20) number of clusters under a parallel-group CRT setting across scenarios of covariate adjustment (including adjustment for one or more person-level or cluster-level covariates) for both binary and count outcomes. We find that when the intraclass correlation is non-negligible ($\geq 0.01$) and the number of covariates is small ($\leq 2$), likelihood ratio tests with a between-within denominator degree of freedom have type I error rates close to the nominal level. When the number of covariates is moderate ($\geq 5$), the relative performance of the tests varied considerably across our simulation scenarios and no method performed uniformly well. Therefore, we recommend adjusting for no more than a few covariates and using likelihood ratio tests with a between-within denominator degree of freedom.

The learning speed of feed-forward neural networks is notoriously slow and has presented a bottleneck in deep learning applications for several decades. For instance, gradient-based learning algorithms, which are used extensively to train neural networks, tend to work slowly when all of the network parameters must be iteratively tuned. To counter this, both researchers and practitioners have tried introducing randomness to reduce the learning requirement. Based on the original construction of Igelnik and Pao, single-layer neural networks with random input-to-hidden layer weights and biases have seen success in practice, but the necessary theoretical justification is lacking. In this paper, we begin to fill this theoretical gap. We provide a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with approximation error decaying asymptotically like $O(1/\sqrt{n})$ in the number $n$ of network nodes. We then extend this result to the non-asymptotic setting, proving that one can achieve any desired approximation error with high probability provided $n$ is sufficiently large. We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space, providing theoretical guarantees in both the asymptotic and non-asymptotic forms. Finally, we illustrate our results on manifolds with numerical experiments.
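The practical appeal of this architecture is that only the output layer is trained: the input-to-hidden weights and biases are drawn at random and frozen, reducing training to a single linear least-squares solve. A minimal sketch of that generic recipe (illustrative names and hyperparameters; not the specific distributions of the Igelnik and Pao construction):

```python
import numpy as np

def fit_random_layer(X, y, n_nodes=200, scale=1.0, seed=0):
    """Draw random (frozen) input-to-hidden weights W and biases b, then
    fit only the output weights beta by linear least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=scale, size=(X.shape[1], n_nodes))
    b = rng.uniform(-1.0, 1.0, size=n_nodes)
    H = np.tanh(X @ W + b)                      # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because the hidden layer is fixed, the only optimization is convex (ordinary least squares), which is what makes this family of networks fast to train compared with end-to-end gradient descent.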

A key challenge for a common waveform for Integrated Sensing and Communications (ISAC) -- widely seen as an attractive proposition to achieve high performance for both functionalities, while efficiently utilizing available resources -- lies in leveraging information-bearing channel-coded communications signals (c.c.s) for sensing. In this paper, we investigate the sensing performance of c.c.s in (multi-user) interference-limited operation, and show that it is limited by sidelobes in the range-Doppler map, whose form depends on whether the c.c.s modulates a single-carrier or OFDM waveform. While uncoded communications signals -- comprising a block of $N$ i.i.d.\ zero-mean symbols -- give rise to asymptotically (i.e., as $N \rightarrow \infty$) zero sidelobes due to the law of large numbers, it is not obvious that the same holds for c.c.s, as structured channel coding schemes (e.g., linear block codes) induce dependence across codeword symbols. In this paper, we show that c.c.s also give rise to asymptotically zero sidelobes -- for both single-carrier and OFDM waveforms -- by deriving upper bounds for the tail probabilities of the sidelobe magnitudes that decay as $\exp( - O($code rate $\times$ block length$))$. This implies that for any code rate, c.c.s are effective sensing signals that are robust to multi-user interference at sufficiently large block lengths, with negligible difference in performance based on whether they modulate a single-carrier or OFDM waveform. We verify the latter implication through simulations, where we observe the sensing performance (characterized by the detection and false-alarm probabilities) of a QPSK-modulated c.c.s (code rate = 120/1024, block length = 1024 symbols) to match that of a comparable interference-free FMCW waveform even at high interference levels (signal-to-interference ratio of $-11$ dB), for both single-carrier and OFDM waveforms.

This article establishes novel strong uniform laws of large numbers for randomly weighted sums such as bootstrap means. By leveraging recent advances, these results extend previous work in their general applicability to a wide range of weighting procedures and in their flexibility with respect to the effective bootstrap sample size. In addition to the standard multinomial bootstrap and the $m$-out-of-$n$ bootstrap, our results apply to a large class of randomly weighted sums involving negatively orthant dependent (NOD) weights, including the Bayesian bootstrap, jackknife, resampling without replacement, simple random sampling with over-replacement, independent weights, and multivariate Gaussian weighting schemes. Weights are permitted to be non-identically distributed and possibly even negative. Our proof technique is based on extending a proof of the i.i.d.\ strong uniform law of large numbers to employ strong laws for randomly weighted sums; in particular, we exploit a recent Marcinkiewicz--Zygmund strong law for NOD weighted sums.
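As a concrete instance of such a randomly weighted sum, the Bayesian bootstrap replaces the multinomial resampling counts with continuous Dirichlet$(1,\ldots,1)$ weights, which are dependent (they sum to one) and fall within the NOD weighting schemes covered by these laws. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def bayesian_bootstrap_means(x, n_draws=1000, seed=0):
    """Each draw forms the randomly weighted sum sum_i W_i * x_i with
    Dirichlet(1, ..., 1) weights -- the Bayesian bootstrap mean."""
    rng = np.random.default_rng(seed)
    W = rng.dirichlet(np.ones(len(x)), size=n_draws)  # each row sums to 1
    return W @ x
```

The strong uniform laws established in the article guarantee, for example, that such weighted averages converge uniformly over suitable function classes, justifying the bootstrap distribution as an approximation device.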

We study exact active learning of binary and multiclass classifiers with margin. Given an $n$-point set $X \subset \mathbb{R}^m$, we want to learn any unknown classifier on $X$ whose classes have finite strong convex hull margin, a new notion extending the SVM margin. In the standard active learning setting, where only label queries are allowed, learning a classifier with strong convex hull margin $\gamma$ requires in the worst case $\Omega\big(1+\frac{1}{\gamma}\big)^{(m-1)/2}$ queries. On the other hand, using the more powerful seed queries (a variant of equivalence queries), the target classifier could be learned in $O(m \log n)$ queries via Littlestone's Halving algorithm; however, Halving is computationally inefficient. In this work we show that, by carefully combining the two types of queries, a binary classifier can be learned in time $\operatorname{poly}(n+m)$ using only $O(m^2 \log n)$ label queries and $O\big(m \log \frac{m}{\gamma}\big)$ seed queries; the result extends to $k$-class classifiers at the price of a $k!k^2$ multiplicative overhead. Similar results hold when the input points have bounded bit complexity, or when only one class has strong convex hull margin against the rest. We complement the upper bounds by showing that in the worst case any algorithm needs $\Omega\big(k m \log \frac{1}{\gamma}\big)$ seed and label queries to learn a $k$-class classifier with strong convex hull margin $\gamma$.

Hoare-style program logics are a popular and effective technique for software verification. Relational program logics are an instance of this approach that enables reasoning about relationships between the execution of two or more programs. Existing relational program logics have focused on verifying that all runs of a collection of programs do not violate a specified relational behavior. Several important relational properties, including refinement and noninterference, do not fit into this category, as they also mandate the existence of specific desirable executions. This paper presents RHLE, a logic for verifying these sorts of relational $\forall\exists$ properties. Key to our approach is a novel form of function specification that employs a variant of ghost variables to ensure that valid implementations exhibit certain behaviors. We have used a program verifier based on RHLE to verify a diverse set of relational $\forall\exists$ properties drawn from the literature.
