
Entanglement-assisted quantum error-correcting (EAQEC) codes are a generalization of standard stabilizer quantum error-correcting codes: with the help of pre-shared entanglement between the sender and the receiver, they can be constructed from arbitrary classical codes by relaxing the self-orthogonality condition. In this paper, using generalized Reed-Solomon codes, we construct two families of entanglement-assisted quantum error-correcting MDS (EAQMDS) codes with parameters $[[\frac{b({q^2}-1)}{a}+\frac{{q^2} - 1}{a}, \frac{b({q^2}-1)}{a}+\frac{{q^2}-1}{a}-2d+c+2,d;c]]_q$, where $q$ is a prime power and $a \mid (q+1)$. Our EAQMDS codes have much larger minimum distance than the known EAQMDS codes of the same length that consume the same number of ebits. Moreover, some of the lengths of our EAQMDS codes are not divisors of $q^2 \pm 1$, which makes them new and different from all previously known ones.
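For context (a standard fact, not specific to this construction): an EAQEC code $[[n,k,d;c]]_q$ is called MDS when it saturates the entanglement-assisted Singleton bound (for $d \le (n+2)/2$), which is exactly the dimension stated in the parameters above:

```latex
2(d-1) \le n - k + c
\quad\Longrightarrow\quad
k_{\mathrm{MDS}} = n + c - 2(d-1) = n - 2d + c + 2,
\qquad n = \frac{b(q^2-1)}{a} + \frac{q^2-1}{a}.
```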

Related Content

Finite dynamical systems (FDSs) are commonly used to model systems with a finite number of states that evolve deterministically and at discrete time steps. Considered up to isomorphism, those correspond to functional graphs. As such, FDSs have a sum and product operation, which correspond to the direct sum and direct product of their respective graphs; the collection of FDSs endowed with these operations then forms a semiring. The algebraic structure of the product of FDSs is particularly interesting. For instance, an FDS can be factorised if and only if it is composed of two sub-systems running in parallel. In this work, we further the understanding of the factorisation, division, and root finding problems for FDSs. Firstly, an FDS $A$ is cancellative if one can divide by it unambiguously, i.e. $AX = AY$ implies $X = Y$. We prove that an FDS $A$ is cancellative if and only if it has a fixpoint. Secondly, we prove that if an FDS $A$ has a $k$-th root (i.e. $B$ such that $B^k = A$), then it is unique. Thirdly, unlike integers, the monoid of FDS product does not have unique factorisation into irreducibles. We instead exhibit a large class of monoids of FDSs with unique factorisation. To obtain our main results, we introduce the unrolling of an FDS, which can be viewed as a space-time expansion of the system. This allows us to work with (possibly infinite) trees, where the product is easier to handle than its counterpart for FDSs.
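As a concrete illustration (a toy encoding, not from the paper): an FDS on a finite state set can be stored as a next-state map, the product acts componentwise on pairs of states, and the paper's cancellativity criterion reduces to checking for a fixpoint.

```python
def fds_product(f, g):
    """Direct product of two FDSs given as next-state dicts: the
    product system updates each coordinate independently, so a pair
    (a, b) is a fixpoint iff a and b are fixpoints of f and g."""
    return {(a, b): (f[a], g[b]) for a in f for b in g}

def has_fixpoint(f):
    """Per the paper's first result, an FDS is cancellative
    (AX = AY implies X = Y) iff it has a fixpoint."""
    return any(f[x] == x for x in f)
```

For example, a 2-cycle `{0: 1, 1: 0}` has no fixpoint and hence is not cancellative, while `{0: 0, 1: 0}` is.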

The Fr\'{e}chet distance is one of the most studied distance measures between curves $P$ and $Q$. The data structure variant of the problem is a longstanding open problem: Efficiently preprocess $P$, so that for any $Q$ given at query time, one can efficiently approximate their Fr\'{e}chet distance. There exist conditional lower bounds that prohibit $(1 + \varepsilon)$-approximate Fr\'{e}chet distance computations in subquadratic time, even when preprocessing $P$ using any polynomial amount of time and space. As a consequence, the problem has been studied under various restrictions: restricting $Q$ to be a (horizontal) segment, or requiring $P$ and $Q$ to be so-called \emph{realistic} input curves. We give a data structure for $(1+\varepsilon)$-approximate discrete Fr\'{e}chet distance in any metric space $\mathcal{X}$ between a realistic input curve $P$ and any query curve $Q$. After preprocessing the input curve $P$ (of length $|P|=n$) in $O(n \log n)$ time, we may answer queries specifying a query curve $Q$ and an $\varepsilon$, and output a value $d(P,Q)$ which is at most a $(1+\varepsilon)$-factor away from the true Fr\'{e}chet distance between $Q$ and $P$. Thus, we give the first data structure that adapts to $\varepsilon$-values specified at query time, and the first data structure to handle query curves with arbitrarily many vertices. Our query time is asymptotically linear in $|Q|=m$, $\frac{1}{\varepsilon}$, $\log n$, and the realism parameter $c$ or $\kappa$. The method presented in this paper simplifies and generalizes previous contributions to the static problem variant. We obtain efficient queries (and therefore static algorithms) for Fr\'{e}chet distance computation in high-dimensional spaces and other metric spaces (e.g., when $\mathcal{X}$ is a graph under the shortest path metric). Our method supports subcurve queries at no additional cost.
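For reference, the classic quadratic-time dynamic program for the exact discrete Fréchet distance, i.e., the static baseline that the data structure's $(1+\varepsilon)$-approximate queries improve upon (a minimal sketch in the Euclidean plane; the paper's method works in general metric spaces):

```python
import math

def discrete_frechet(P, Q):
    """Exact discrete Frechet distance between point sequences P and Q
    via the standard O(|P||Q|) dynamic program."""
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], cost)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], cost)
            else:
                # Best of advancing on P, on Q, or on both.
                ca[i][j] = max(min(ca[i - 1][j], ca[i][j - 1],
                                   ca[i - 1][j - 1]), cost)
    return ca[-1][-1]
```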

Vision Transformers (ViTs) outperform convolutional neural networks (CNNs) on several vision tasks thanks to their global modeling capabilities. However, ViTs lack the inductive bias inherent to convolution, so they require a large amount of training data. As a result, ViTs do not perform as well as CNNs on small datasets, such as those in medicine and science. We experimentally found that masked autoencoders (MAE) can make the transformer focus more on the image itself, alleviating the data hunger of ViTs to some extent. Yet the current MAE model is too complex, which causes over-fitting on small datasets and leaves a gap between MAEs trained on small datasets and advanced CNN models. We therefore investigated how to reduce the decoder complexity in MAE and found an architectural configuration better suited to small datasets. In addition, we designed a location prediction task and a contrastive learning task to introduce localization and invariance characteristics into MAE. Our contrastive learning task not only enables the model to learn high-level visual information but also allows training of MAE's class token, something most MAE improvement efforts do not consider. Extensive experiments show that our method achieves state-of-the-art performance on standard small datasets as well as on medical datasets with few samples, compared to currently popular masked image modeling (MIM) methods and vision transformers for small datasets. The code and models are available at //github.com/Talented-Q/SDMAE.
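As background on the masking step that MAE-style training relies on (a generic sketch with an illustrative 75% ratio, not this paper's exact configuration): a random subset of patch indices is hidden from the encoder, and the decoder reconstructs the masked patches.

```python
import numpy as np

def random_mask(num_patches, mask_ratio, rng):
    """MAE-style random masking: shuffle patch indices and split them
    into a masked set (hidden from the encoder) and a visible set."""
    n_mask = int(num_patches * mask_ratio)
    perm = rng.permutation(num_patches)
    return perm[:n_mask], perm[n_mask:]  # (masked indices, visible indices)
```

For a 14x14 patch grid (196 patches) at a 0.75 ratio, 147 patches are masked and 49 remain visible to the encoder.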

Practitioners are often left tuning Metropolis-Hastings algorithms by trial and error or using optimal scaling guidelines to avoid poor empirical performance. We develop lower bounds on the convergence rates of geometrically ergodic accept-reject-based Markov chains (e.g., Metropolis-Hastings, non-reversible Metropolis-Hastings) to study their computational complexity. If the target density concentrates with a parameter $n$ (e.g., Bayesian posterior concentration, Laplace approximations), we show the convergence rate can tend to $1$ exponentially fast if the tuning parameters do not depend carefully on $n$. We show this is the case for random-walk Metropolis in Bayesian logistic regression with Zellner's g-prior when the dimension-to-sample-size ratio satisfies $d/n \to \gamma \in (0, 1)$. For more general target densities, we study a special class of Metropolis-Hastings algorithms with a Gaussian proposal (e.g., random walk and Metropolis-adjusted Langevin algorithms), where we give more general conditions. An application to flat-prior Bayesian logistic regression as $n \to \infty$ is studied. We also develop lower bounds in Wasserstein distances, which have become popular in the convergence analysis of high-dimensional MCMC algorithms, with similar conclusions.
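To fix ideas, a minimal one-dimensional random-walk Metropolis sampler with a Gaussian proposal; the `scale` argument is the tuning parameter whose required dependence on $n$ the lower bounds constrain (an illustrative sketch, not the paper's construction):

```python
import math
import random

def rwm(logpi, x0, scale, iters, seed=0):
    """Random-walk Metropolis targeting the density exp(logpi):
    propose y ~ N(x, scale^2), accept with prob min(1, pi(y)/pi(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(iters):
        y = x + rng.gauss(0.0, scale)
        # Metropolis accept/reject step (symmetric proposal).
        if rng.random() < math.exp(min(0.0, logpi(y) - logpi(x))):
            x = y
        samples.append(x)
    return samples
```

Run on a standard normal target, the chain's empirical mean and variance should approach 0 and 1; a badly chosen `scale` (far too small or too large) slows this convergence, which is the practical face of the tuning issue studied here.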

A graph $G$ is called self-ordered (a.k.a asymmetric) if the identity permutation is its only automorphism. Equivalently, there is a unique isomorphism from $G$ to any graph that is isomorphic to $G$. We say that $G=(V,E)$ is robustly self-ordered if the size of the symmetric difference between $E$ and the edge-set of the graph obtained by permuting $V$ using any permutation $\pi:V\to V$ is proportional to the number of non-fixed-points of $\pi$. In this work, we initiate the study of the structure, construction and utility of robustly self-ordered graphs. We show that robustly self-ordered bounded-degree graphs exist (in abundance), and that they can be constructed efficiently, in a strong sense. Specifically, given the index of a vertex in such a graph, it is possible to find all its neighbors in polynomial-time (i.e., in time that is poly-logarithmic in the size of the graph). We also consider graphs of unbounded degree, seeking correspondingly unbounded robustness parameters. We again demonstrate that such graphs (of linear degree) exist (in abundance), and that they can be constructed efficiently, in a strong sense. This turns out to require very different tools. Specifically, we show that the construction of such graphs reduces to the construction of non-malleable two-source extractors (with very weak parameters but with some additional natural features). We demonstrate that robustly self-ordered bounded-degree graphs are useful towards obtaining lower bounds on the query complexity of testing graph properties both in the bounded-degree and the dense graph models. One of the results that we obtain, via such a reduction, is a subexponential separation between the query complexities of testing and tolerant testing of graph properties in the bounded-degree graph model.
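To make the definition concrete (a brute-force check feasible only for tiny graphs; the paper's constructions are, of course, vastly more efficient and also quantify robustness):

```python
from itertools import permutations

def is_self_ordered(n, edges):
    """Check that the identity is the only automorphism of the
    undirected graph on vertices 0..n-1 with the given edge list."""
    E = {frozenset(e) for e in edges}
    ident = tuple(range(n))
    for p in permutations(range(n)):
        # p is an automorphism iff it maps the edge set onto itself.
        if p != ident and {frozenset((p[u], p[v])) for u, v in edges} == E:
            return False
    return True
```

For instance, the 3-vertex path is not self-ordered (swapping its endpoints is a nontrivial automorphism), while the 6-vertex path with one chord added, edges {01, 12, 23, 34, 45, 24}, is.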

Federated learning allows multiple clients to collaboratively train a model without exchanging their data, thus preserving data privacy. Unfortunately, it suffers significant performance degradation under heterogeneous data at clients. Common solutions in local training involve designing a specific auxiliary loss to regularize weight divergence or feature inconsistency. However, we discover that these approaches fall short of the expected performance because they ignore the existence of a vicious cycle between classifier divergence and feature mapping inconsistency across clients, such that client models are updated in inconsistent feature spaces with diverged classifiers. We then propose a simple yet effective framework named Federated learning with Feature Anchors (FedFA) to align the feature mappings and calibrate the classifiers across clients during local training, which allows client models to be updated in a shared feature space with consistent classifiers. We demonstrate that this modification brings similar classifiers and a virtuous cycle between feature consistency and classifier similarity across clients. Extensive experiments show that FedFA significantly outperforms state-of-the-art federated learning algorithms on various image classification datasets under label and feature distribution skews.
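One way to picture the feature-anchor idea (a hypothetical penalty sketched here for illustration; FedFA's actual objective and calibration procedure are specified in the paper): each class has a shared anchor point in feature space, and local training pulls every sample's feature toward its class anchor, so all clients optimize in a common feature geometry.

```python
import numpy as np

def feature_anchor_loss(features, labels, anchors):
    """Illustrative anchor penalty: mean squared distance from each
    sample's feature vector to the shared anchor of its class.
    (Sketch of the idea, not FedFA's exact loss.)"""
    return float(np.mean(np.sum((features - anchors[labels]) ** 2, axis=1)))
```

The loss is zero exactly when every feature coincides with its class anchor, i.e., when client feature spaces are perfectly aligned.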

In a mixed generalized linear model, the objective is to learn multiple signals from unlabeled observations: each sample comes from exactly one signal, but it is not known which one. We consider the prototypical problem of estimating two statistically independent signals in a mixed generalized linear model with Gaussian covariates. Spectral methods are a popular class of estimators which output the top two eigenvectors of a suitable data-dependent matrix. However, despite their wide applicability, their design is still obtained via heuristic considerations, and the number of samples $n$ needed to guarantee recovery is super-linear in the signal dimension $d$. In this paper, we develop exact asymptotics on spectral methods in the challenging proportional regime in which $n, d$ grow large and their ratio converges to a finite constant. By doing so, we are able to optimize the design of the spectral method, and combine it with a simple linear estimator, in order to minimize the estimation error. Our characterization exploits a mix of tools from random matrices, free probability and the theory of approximate message passing algorithms. Numerical simulations for mixed linear regression and phase retrieval display the advantage enabled by our analysis over existing designs of spectral methods.
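A generic spectral estimator of the kind being analyzed, sketched for mixed linear regression (the preprocessing function `T` is the design degree of freedom that the paper optimizes; `T(y) = y**2` below is just a common default, not the paper's optimal choice):

```python
import numpy as np

def spectral_estimate(X, y, T=lambda y: y ** 2):
    """Spectral method: form D = (1/n) * sum_i T(y_i) x_i x_i^T and
    return its top two eigenvectors as estimates of the span of the
    two hidden signals."""
    n, _ = X.shape
    D = (X * T(y)[:, None]).T @ X / n  # data-dependent symmetric matrix
    _, V = np.linalg.eigh(D)           # eigenvalues in ascending order
    return V[:, -2:]                   # top two eigenvectors
```

On a synthetic mixture of two orthogonal regression vectors, the top two eigenvectors concentrate on the span of the signals once $n/d$ is large enough, which is the regime the exact asymptotics sharpen.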

This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string $\hat{x}$, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website (//pretrain.nlpedia.ai/) including a constantly-updated survey and paperlist.
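To make the pipeline concrete (template, slot marker, and verbalizer below are illustrative examples, not tied to a specific library): the input x is wrapped into a prompt with an unfilled slot, a language model fills the slot, and a verbalizer maps the filled word back to a label y.

```python
def make_prompt(template, x):
    """Prompting function: wrap input x in a template containing an
    unfilled slot ([MASK]) for the language model to complete."""
    return template.format(x=x)

def verbalize(filled_word, verbalizer):
    """Map the word the LM put in the slot to a task label y."""
    return verbalizer.get(filled_word)

# Illustrative sentiment-classification prompt.
template = "Review: {x} Overall, it was [MASK]."
verbalizer = {"great": "positive", "terrible": "negative"}
prompt = make_prompt(template, "A touching and funny film.")
```

In the survey's terms, `make_prompt` plays the role of the prompting function producing x', and `verbalize` derives y from the filled string.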

In structure learning, the output is generally a structure that is used as supervision information to achieve good performance. Since the interpretability of deep learning models has attracted increasing attention in recent years, it would be beneficial if we could learn an interpretable structure from deep learning models. In this paper, we focus on Recurrent Neural Networks (RNNs), whose inner mechanism is still not clearly understood. We find that a Finite State Automaton (FSA) that processes sequential data has a more interpretable inner mechanism and can be learned from RNNs as the interpretable structure. We propose two methods to learn FSAs from RNNs, based on two different clustering algorithms. We first give a graphical illustration of the FSA for human beings to follow, which shows its interpretability. From the FSA's point of view, we then analyze how the performance of RNNs is affected by the number of gates, as well as the semantic meaning behind the transitions of numerical hidden states. Our results suggest that RNNs with a simple gated structure, such as the Minimal Gated Unit (MGU), are more desirable, and that the transitions in the FSA leading to a specific classification result are associated with corresponding words that are understandable by human beings.
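A toy version of the cluster-then-vote idea behind such extraction (the paper's two clustering-based methods are more involved): after clustering hidden states into discrete FSA states, each transition is estimated by a majority vote over the observed (state, input symbol, next state) triples.

```python
from collections import Counter, defaultdict

def extract_fsa(runs):
    """Build FSA transitions from clustered RNN runs. Each run is a
    list of (state_cluster, input_symbol, next_state_cluster) triples;
    the transition for (state, symbol) is the most frequent successor."""
    votes = defaultdict(Counter)
    for run in runs:
        for s, a, t in run:
            votes[(s, a)][t] += 1
    return {key: c.most_common(1)[0][0] for key, c in votes.items()}
```

The resulting dict maps (state, symbol) pairs to next states and can be rendered directly as the kind of graphical FSA the paper uses for inspection.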
