
Frameproof codes are a class of secure codes originally introduced in the pioneering work of Boneh and Shaw in the context of digital fingerprinting. They can be used to enhance the security and credibility of digital content. Let $M_{c,l}(q)$ denote the largest cardinality of a $q$-ary $c$-frameproof code of length $l$. Based on an intriguing observation relating $M_{c,l}(q)$ to the renowned Erd\H{o}s Matching Conjecture in extremal set theory, Blackburn posed in 2003 an open problem on the precise value of the limit $R_{c,l}=\lim_{q\rightarrow\infty}\frac{M_{c,l}(q)}{q^{\lceil l/c \rceil}}$. By combining several ideas from the probabilistic method, we present a lower bound for $M_{c,l}(q)$ which, together with an upper bound of Blackburn, completely determines $R_{c,l}$ for {\it all} fixed $c,l$ and resolves the above open problem in full generality. We also present an improved upper bound for $M_{c,l}(q)$.
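To make the setting concrete, here is a minimal brute-force sketch (Python, not from the paper) of the standard descendant-set definition of Boneh and Shaw: a code $C$ is $c$-frameproof if, for every coalition of at most $c$ codewords, the only codewords among its descendants are the coalition members themselves.

```python
from itertools import combinations, product

def descendants(coalition):
    """All words obtainable by picking, in each coordinate, a symbol
    used by some member of the coalition in that coordinate."""
    columns = [set(word[i] for word in coalition) for i in range(len(coalition[0]))]
    return set(product(*columns))

def is_frameproof(code, c):
    """Check the c-frameproof property: for every coalition S of at most c
    codewords, the only codewords inside desc(S) are the members of S."""
    code = [tuple(w) for w in code]
    codeset = set(code)
    for size in range(1, c + 1):
        for coalition in combinations(code, size):
            if descendants(coalition) & codeset != set(coalition):
                return False
    return True

# Example: the length-2 code {00, 11, 22} over the alphabet {0, 1, 2} is 2-frameproof.
print(is_frameproof([(0, 0), (1, 1), (2, 2)], c=2))  # True
```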

Related content

This paper makes two contributions to the field of text-based patent similarity. First, it compares the performance of different kinds of patent-specific pretrained embedding models, namely static word embeddings (such as word2vec and doc2vec models) and contextual word embeddings (such as transformer-based models), on the task of patent similarity calculation. Second, it compares specifically the performance of Sentence Transformer (SBERT) architectures with different training phases on the patent similarity task. To assess the models' performance, we use information about patent interferences, a phenomenon in which two or more patent claims belonging to different patent applications are proven by patent examiners to be overlapping. We therefore use these interference cases as a proxy for maximum similarity between two patents, treating them as ground truth to evaluate the performance of the different embedding models. Our results show, first, that Patent SBERT-adapt-ub, the domain adaptation of the pretrained Sentence Transformer architecture proposed in this research, outperforms the current state of the art in patent similarity. Second, they show that, in some cases, the performance of large static models is still comparable to that of contextual ones when trained on extensive data; we therefore believe that the superiority in performance of contextual embeddings may be related not to the actual architecture but rather to the way the training phase is performed.
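As an illustration of the SBERT-style similarity computation, the sketch below uses the public sentence-transformers library with a generic pretrained checkpoint as a stand-in; the paper's Patent SBERT-adapt-ub model, its training data, and the interference cases are not assumed to be available here, and the claim texts are invented.

```python
from sentence_transformers import SentenceTransformer, util

# Stand-in checkpoint; a domain-adapted model such as Patent SBERT-adapt-ub
# would be loaded the same way if its weights were available.
model = SentenceTransformer("all-MiniLM-L6-v2")

claim_a = "A rechargeable lithium-ion battery comprising a solid electrolyte layer..."
claim_b = "A secondary battery having a solid-state electrolyte disposed between electrodes..."

embeddings = model.encode([claim_a, claim_b], normalize_embeddings=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"cosine similarity: {similarity:.3f}")
```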

We address the problem of the best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily Chebyshev) under arbitrary linear constraints. By modifying the concepts of alternance and of the Remez iterative procedure, we present a method which demonstrates its efficiency in numerical problems. The linear rate of convergence is proved under some favourable assumptions. Special attention is paid to systems of complex exponents, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein type inequalities are considered.
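The underlying minimax problem can be illustrated with a crude discretized linear-programming baseline, which is not the modified Remez procedure of the paper: minimize $t$ subject to $|f(x_j)-\sum_i c_i g_i(x_j)|\le t$ over a grid, here with a Gaussian basis and SciPy's linprog.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_fit(f, basis, grid):
    """Best uniform approximation of f on a grid by a span of basis functions:
    minimize t subject to |f(x_j) - sum_i c_i g_i(x_j)| <= t at every grid point."""
    G = np.column_stack([g(grid) for g in basis])    # |grid| x n_basis
    fx = f(grid)
    n = G.shape[1]
    cost = np.zeros(n + 1); cost[-1] = 1.0           # variables (c_1,...,c_n, t)
    A_ub = np.vstack([np.hstack([ G, -np.ones((len(grid), 1))]),
                      np.hstack([-G, -np.ones((len(grid), 1))])])
    b_ub = np.concatenate([fx, -fx])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[-1]                      # coefficients and uniform error t

# Example: approximate |x| on [-1, 1] by three Gaussians.
grid = np.linspace(-1, 1, 401)
basis = [lambda x, m=m: np.exp(-(x - m) ** 2 / 0.1) for m in (-0.7, 0.0, 0.7)]
coeffs, err = minimax_fit(np.abs, basis, grid)
print(coeffs, err)
```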

In the search for highly efficient decoders for short LDPC codes approaching maximum likelihood performance, a relayed decoding strategy, which activates an ordered statistics decoding process upon failure of a neural min-sum decoder, is enhanced with three innovations. Firstly, soft information gathered at each step of the neural min-sum decoder is leveraged to forge a new reliability measure using a convolutional neural network. This measure aids in constructing the most reliable basis of ordered statistics decoding, bolstering the decoding process by excluding error-prone bits or concentrating them in a smaller area. Secondly, an adaptive ordered statistics decoding process is introduced, guided by a derived decoding path comprising prioritized blocks, each containing distinct test error patterns. The priority of these blocks is determined from statistical data gathered during the query phase. Furthermore, effective complexity management methods are devised by adjusting the decoding path's length or refining the constraints on the involved blocks. Thirdly, a simple auxiliary criterion is introduced to reduce computational complexity by minimizing the number of candidate codewords before selecting the optimal estimate. Extensive experimental results and complexity analysis strongly support the proposed framework, demonstrating its advantages in terms of high throughput, low complexity, and independence from the noise variance, in addition to superior decoding performance.
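For context, the sketch below implements a plain normalized min-sum decoder (Python/NumPy); it is not the neural decoder or the OSD stage of the paper. The neural variant would replace the single scaling factor by trained per-edge and per-iteration weights, and the example parity-check matrix and LLRs are invented.

```python
import numpy as np

def min_sum_decode(H, llr, iters=20, alpha=0.8):
    """Plain normalized min-sum decoding over a binary parity-check matrix H."""
    m, n = H.shape
    msg_cv = np.zeros((m, n))                        # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        total = llr + msg_cv.sum(axis=0)             # posterior LLR of each bit
        msg_vc = np.where(H == 1, total - msg_cv, 0.0)   # variable-to-check messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            v = msg_vc[i, idx]
            signs = np.where(v < 0, -1.0, 1.0)
            ext_sign = np.prod(signs) * signs        # sign product excluding the edge itself
            mags = np.abs(v)
            order = np.argsort(mags)
            ext_min = np.full_like(mags, mags[order[0]])
            ext_min[order[0]] = mags[order[1]]       # each edge sees the min over the others
            msg_cv[i, idx] = alpha * ext_sign * ext_min
        hard = (llr + msg_cv.sum(axis=0) < 0).astype(int)
        if not (H @ hard % 2).any():                 # all parity checks satisfied: stop early
            return hard, True
    return hard, False

# Example: (7,4) Hamming code, all-zero codeword with one unreliable (flipped) bit.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, 1.5, -0.4, 2.3, 1.8, 0.9, 1.2])
print(min_sum_decode(H, llr))
```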

Developing algorithms for accurate and comprehensive neural decoding of mental contents is one of the long-cherished goals in the field of neuroscience and brain-machine interfaces. Previous studies have demonstrated the feasibility of neural decoding by training machine learning models to map brain activity patterns into a semantic vector representation of stimuli. These vectors, hereafter referred to as pretrained feature vectors, are usually derived from semantic spaces based solely on image and/or text features, and may therefore have characteristics quite different from the way visual stimuli are represented in the human brain, which limits the ability of brain decoders to learn this mapping. To address this issue, we propose a representation learning framework, termed brain-grounding of semantic vectors, which fine-tunes pretrained feature vectors to better align with the neural representation of visual stimuli in the human brain. We trained this model with functional magnetic resonance imaging (fMRI) data of 150 different visual stimulus categories, and then performed zero-shot brain decoding and identification analyses on 1) fMRI and 2) magnetoencephalography (MEG) data. Interestingly, we observed that using the brain-grounded vectors increases the brain decoding and identification accuracy on brain data from both neuroimaging modalities. These findings underscore the potential of incorporating a richer array of brain-derived features to enhance the performance of brain decoding algorithms.
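A schematic sketch of the zero-shot identification step is given below, with invented arrays standing in for fMRI patterns and (brain-grounded) feature vectors: ridge regression maps brain activity into the vector space, and a stimulus is identified by cosine similarity to the candidate vectors. This is only a generic illustration of the analysis, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Hypothetical data: 150 training stimuli with fMRI patterns (n_voxels)
# and semantic feature vectors (n_dims); a held-out set for zero-shot testing.
X_train = rng.standard_normal((150, 500))          # fMRI patterns
V_train = rng.standard_normal((150, 64))           # (brain-grounded) feature vectors
X_test  = rng.standard_normal((20, 500))           # unseen-category fMRI patterns
V_candidates = rng.standard_normal((20, 64))       # feature vectors of candidate categories

# Learn a linear map from brain activity to the semantic vector space.
decoder = Ridge(alpha=10.0).fit(X_train, V_train)
V_pred = decoder.predict(X_test)

# Zero-shot identification: pick the candidate vector closest to the prediction.
scores = cosine_similarity(V_pred, V_candidates)
identified = scores.argmax(axis=1)
accuracy = (identified == np.arange(len(X_test))).mean()
print(f"identification accuracy: {accuracy:.2f}")
```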

First studied by Kempa and Prezza in 2018 as the cement of text compression algorithms, string attractors have become a compelling object of theoretical research within the community of combinatorics on words. In this context, they have been studied for several families of finite and infinite words. In this paper, we obtain string attractors of prefixes of particular infinite words generalizing $k$-bonacci words (including the famous Fibonacci word) and related to simple Parry numbers. In fact, our description involves the numeration systems classically derived from the considered morphisms. This extends our previous work published in the international conference WORDS 2023.
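As a small illustration of the definition (not of the paper's constructions): a set $\Gamma$ of positions is a string attractor of a word $w$ if every distinct factor of $w$ has at least one occurrence crossing a position of $\Gamma$, which can be checked by brute force.

```python
def is_string_attractor(word, positions):
    """Check whether `positions` (0-based indices) is a string attractor of `word`:
    every distinct factor must have an occurrence covering some attractor position."""
    n = len(word)
    pos = set(positions)
    factors = {word[i:j] for i in range(n) for j in range(i + 1, n + 1)}
    for f in factors:
        covered = any(
            pos & set(range(i, i + len(f)))
            for i in range(n - len(f) + 1)
            if word[i:i + len(f)] == f
        )
        if not covered:
            return False
    return True

# Example: for the Fibonacci-word prefix "abaab", the position set {1, 3} is a string attractor.
print(is_string_attractor("abaab", {1, 3}))   # True
print(is_string_attractor("abaab", {0}))      # False (the factor "b" is never covered)
```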

Reed--Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed--Solomon codes, a fundamental question in coding theory is whether Reed--Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed--Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly punctured Reed--Solomon codes over an exponentially large field size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed--Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed--Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and the code length $n$ cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list size $O(1/\varepsilon)$ and near-optimal alphabet size $2^{O(1/\varepsilon^2)}$, where $\varepsilon$ is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list size $O(1/\varepsilon)$ was previously not known to be achievable with any linear code over a constant alphabet size (even non-constructively). Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.
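For readers unfamiliar with the evaluation-map view, here is a minimal sketch of Reed--Solomon encoding over a prime field (plain Python); the random puncturing and list-decoding machinery of the paper are not reproduced, and the code parameters are only an example.

```python
# A Reed--Solomon codeword is the list of evaluations of a degree < k message
# polynomial at n distinct elements of a finite field (here GF(p) for a prime p).
def rs_encode(message, eval_points, p):
    """message: k coefficients in GF(p), low degree first; eval_points: n distinct elements."""
    assert len(set(eval_points)) == len(eval_points), "evaluation points must be distinct"
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule mod p
            acc = (acc * x + c) % p
        return acc
    return [poly_eval(message, a) for a in eval_points]

# Example: a [7, 3] Reed--Solomon code over GF(11); any two distinct codewords agree
# in at most k - 1 = 2 positions, giving minimum distance n - k + 1 = 5.
p, n, k = 11, 7, 3
codeword = rs_encode([4, 1, 6], list(range(n)), p)
print(codeword)
```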

In this paper we investigate the existence of subexponential parameterized algorithms for three fundamental cycle-hitting problems in geometric graph classes. The considered problems, \textsc{Triangle Hitting} (TH), \textsc{Feedback Vertex Set} (FVS), and \textsc{Odd Cycle Transversal} (OCT), ask for the existence in a graph $G$ of a set $X$ of at most $k$ vertices such that $G-X$ is, respectively, triangle-free, acyclic, or bipartite. Such subexponential parameterized algorithms are known to exist in planar and even $H$-minor free graphs from bidimensionality theory [Demaine et al., JACM 2005], and there is a recent line of work lifting these results to geometric graph classes consisting of intersections of "fat" objects ([Grigoriev et al., FOCS 2022] and [Lokshtanov et al., SODA 2022]). In this paper we focus on "thin" objects by considering intersection graphs of segments in the plane with $d$ possible slopes ($d$-DIR graphs) and contact graphs of segments in the plane. Assuming the ETH, we rule out the existence of algorithms:
- solving TH in time $2^{o(n)}$ in 2-DIR graphs; and
- solving TH, FVS, and OCT in time $2^{o(\sqrt{n})}$ in $K_{2,2}$-free contact 2-DIR graphs.
These results indicate that additional restrictions are necessary in order to obtain subexponential parameterized algorithms for these problems. In this direction we provide:
- a $2^{O(k^{3/4}\cdot \log k)}n^{O(1)}$-time algorithm for FVS in contact segment graphs;
- a $2^{O(\sqrt d\cdot t^2 \log t\cdot k^{2/3}\log k)} n^{O(1)}$-time algorithm for TH in $K_{t,t}$-free $d$-DIR graphs; and
- a $2^{O(k^{7/9}\log^{3/2}k)} n^{O(1)}$-time algorithm for TH in contact segment graphs.
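To fix the terminology, the toy sketch below builds the intersection graph of axis-parallel segments, i.e. a 2-DIR graph with the two slopes taken to be horizontal and vertical; contact graphs would additionally require that segments touch without crossing, which this construction does not enforce. The segments in the example are invented.

```python
from itertools import combinations

# A segment is ('h', y, x1, x2) or ('v', x, y1, y2) with x1 <= x2 and y1 <= y2.
def intersects(s, t):
    if s[0] == t[0]:                  # parallel segments: same line and overlapping spans
        return s[1] == t[1] and max(s[2], t[2]) <= min(s[3], t[3])
    h, v = (s, t) if s[0] == 'h' else (t, s)
    _, y, x1, x2 = h
    _, x, y1, y2 = v
    return x1 <= x <= x2 and y1 <= y <= y2

def two_dir_graph(segments):
    """Adjacency lists of the intersection graph of the given axis-parallel segments."""
    adj = {i: set() for i in range(len(segments))}
    for i, j in combinations(range(len(segments)), 2):
        if intersects(segments[i], segments[j]):
            adj[i].add(j)
            adj[j].add(i)
    return adj

# Example: three segments forming a path (segment 1 crosses both 0 and 2).
segs = [('h', 0, 0, 2), ('v', 1, -1, 1), ('h', 1, 0, 2)]
print(two_dir_graph(segs))   # {0: {1}, 1: {0, 2}, 2: {1}}
```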

In numerical simulations, a smooth domain occupied by a fluid has to be approximated by a computational domain that typically does not coincide with the physical domain. Consequently, in order to study the convergence and error estimates of a numerical method, domain-related discretization errors, the so-called variational crimes, need to be taken into account. In this paper we present an elegant alternative to a direct, but rather technical, analysis of variational crimes by means of the penalty approach. We embed the physical domain into a large enough cubed domain and study the convergence of a finite volume method for the corresponding domain-penalized problem. We show that numerical solutions of the penalized problem converge to a generalized solution of the original problem, the so-called dissipative weak solution. If a strong solution exists, the dissipative weak solution emanating from the same initial data coincides with the strong solution. In this case, we apply the novel tool of the relative energy and derive error estimates between the numerical solution and the strong solution. Extensive numerical experiments confirming the theoretical results are presented.
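A toy 1D illustration of the penalty idea (not the compressible fluid setting or the finite volume scheme of the paper): a Dirichlet problem posed on the physical interval is embedded in a larger interval, the exterior is penalized, and the penalized solution approaches the solution of the original problem as the penalty parameter tends to zero. All parameters below are invented for illustration.

```python
import numpy as np

# Toy 1D penalization: solve -u'' + chi_outside(x)/eps * u = f on the large
# interval (0, 2) with u(0) = u(2) = 0; the physical domain is (0.5, 1.5).
# As eps -> 0 the penalized solution approximates the Dirichlet problem
# -u'' = f on (0.5, 1.5) with homogeneous boundary values.
def solve_penalized(f, eps, a=0.5, b=1.5, L=2.0, N=400):
    h = L / N
    x = np.linspace(0, L, N + 1)[1:-1]            # interior grid points
    outside = (x < a) | (x > b)
    main = 2.0 / h**2 + outside / eps             # -u'' stencil plus penalty mass term
    A = np.diag(main) + np.diag(-np.ones(len(x) - 1) / h**2, 1) \
                      + np.diag(-np.ones(len(x) - 1) / h**2, -1)
    u = np.linalg.solve(A, f(x))
    return x, u

x, u_coarse = solve_penalized(lambda x: np.ones_like(x), eps=1e-2)
_, u_fine   = solve_penalized(lambda x: np.ones_like(x), eps=1e-8)
print(np.max(np.abs(u_coarse - u_fine)))   # penalization error shrinks as eps decreases
```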

In this work, we present an efficient approach for solving nonlinear high-contrast multiscale diffusion problems. We incorporate the explicit-implicit-null (EIN) method to separate the nonlinear term into a linear term and a damping term, and then use implicit and explicit time marching schemes for the two parts, respectively. Due to the multiscale property of the linear part, we further introduce a temporal partially explicit splitting scheme and construct suitable multiscale subspaces to speed up the computation. The approximate solution is split into these subspaces associated with different physics. The temporal splitting scheme employs implicit discretization in the low-dimensional subspace representing the high-contrast property and explicit discretization for the other subspace. We analyze the stability of the proposed scheme and give a condition for the choice of the linear diffusion coefficient. The convergence of the proposed method is also established. Several numerical tests are performed to show the efficiency and accuracy of the proposed approach.
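A 1D sketch of the explicit-implicit-null idea for a nonlinear diffusion equation $u_t=(\kappa(u)u_x)_x$: add and subtract a linear diffusion term $\lambda u_{xx}$, treat the linear term implicitly and the remaining damped nonlinear term explicitly. The multiscale subspace construction of the paper is not reproduced, and the stability condition used below is only an assumption for this toy example.

```python
import numpy as np

# EIN splitting for u_t = (kappa(u) u_x)_x on (0, 1) with u = 0 at the boundary:
#   u_t = lam * u_xx + [ (kappa(u) u_x)_x - lam * u_xx ],
# where the first term is treated implicitly and the bracket explicitly.
def ein_step(u, dt, h, lam, kappa):
    n = len(u)
    D2 = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    ue = np.concatenate(([0.0], u, [0.0]))             # homogeneous Dirichlet padding
    flux = 0.5 * (kappa(ue[1:]) + kappa(ue[:-1])) * np.diff(ue) / h
    nonlinear = np.diff(flux) / h                      # (kappa(u) u_x)_x at interior nodes
    rhs = u + dt * (nonlinear - lam * (D2 @ u))
    return np.linalg.solve(np.eye(n) - dt * lam * D2, rhs)

# Example: kappa(u) = 1 + u^2, with lam >= max(kappa)/2 taken as an assumed
# sufficient condition for stability of this toy EIN scheme.
h, dt, lam = 1.0 / 100, 1e-3, 1.5
x = np.linspace(0, 1, 101)[1:-1]
u = np.sin(np.pi * x)
for _ in range(200):
    u = ein_step(u, dt, h, lam, kappa=lambda v: 1.0 + v**2)
print(u.max())
```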
