
Motivated by the construction of physical theories, we consider formal languages $S$ with variables and define a kind of distance between elements of a language by the formula $d(x,y)= \ell(x \nabla y) - \ell(x) \wedge \ell(y)$, where $\ell$ is a length function and $x \nabla y$ denotes the united theory of $x$ and $y$. We mainly work with abstract abelian idempotent monoids $(S,\nabla)$ equipped with length functions $\ell$. The set of length functions can be projected to another set of length functions for which the distance $d$ is a pseudometric and satisfies $d(x\nabla a,y\nabla b) \le d(x,y) + d(a,b)$. We also propose a "signed measure" on the set of Boolean expressions of elements of $S$, and a Banach-Mazur-like distance between abelian idempotent monoids with length functions, or formal languages.
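
To make the definitions concrete, here is a minimal sketch in Python of one toy instance of this setup: $S$ is the collection of finite sets of axiom labels, $\nabla$ is set union, $\ell$ is cardinality, and $\ell(x) \wedge \ell(y)$ is read as the minimum. The axiom labels and the choice of monoid are illustrative assumptions, not the paper's construction.

```python
# Toy instance: theories are finite sets of axiom labels, the united
# theory x ∇ y is set union, and the length ℓ is cardinality.
def length(x: frozenset) -> int:
    """Toy length function: number of axioms in the theory."""
    return len(x)

def join(x: frozenset, y: frozenset) -> frozenset:
    """The united theory x ∇ y; here, plain set union."""
    return x | y

def d(x: frozenset, y: frozenset) -> int:
    """d(x, y) = ℓ(x ∇ y) - ℓ(x) ∧ ℓ(y), reading ∧ as min."""
    return length(join(x, y)) - min(length(x), length(y))

x = frozenset({"A1", "A2"})
y = frozenset({"A2", "A3", "A4"})
a = frozenset({"A5"})
b = frozenset({"A5", "A6"})

print(d(x, y))  # union has length 4, min length is 2, so d = 2
# The subadditivity d(x ∇ a, y ∇ b) <= d(x, y) + d(a, b) in this instance:
print(d(join(x, a), join(y, b)), "<=", d(x, y) + d(a, b))  # 3 <= 3
```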

Related content

One tuple of probability vectors is more informative than another tuple when there exists a single stochastic matrix transforming the probability vectors of the first tuple into those of the other. This is called matrix majorization. Solving an open problem raised by Mu et al., we show that if certain monotones (namely, multivariate extensions of R\'{e}nyi divergences) are strictly ordered between the two tuples, then for sufficiently large $n$ there exists a stochastic matrix taking the $n$-fold Kronecker power of each input distribution to the $n$-fold Kronecker power of the corresponding output distribution. The same conditions, with non-strict ordering of the monotones, are also necessary for such matrix majorization in large samples. Our result also gives conditions for the existence of a sequence of statistical maps that asymptotically (with vanishing error) convert a single copy of each input distribution to the corresponding output distribution with the help of a catalyst that is returned unchanged. Allowing for transformations with arbitrarily small error, we find conditions that are both necessary and sufficient for such catalytic matrix majorization. We derive our results by building on a general algebraic theory of preordered semirings recently developed by one of the authors. This also allows us to recover, in a unified manner, various existing results on majorization in large samples and in the catalytic regime, as well as on relative majorization.
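
As a concrete illustration of the single-copy notion (not of the large-sample or catalytic results), the existence of a stochastic matrix mapping one tuple to another is a plain linear feasibility problem. The sketch below, a hypothetical helper built on scipy, checks it for column-stochastic matrices; conventions for which index is summed over vary.

```python
# Hedged feasibility check for matrix majorization: is there a single
# column-stochastic T with T @ p_i = q_i for every pair in the tuples?
import numpy as np
from scipy.optimize import linprog

def matrix_majorizes(ps, qs):
    """ps: list of length-m probability vectors; qs: list of length-n ones.
    Returns a feasible T (n x m) with T @ p_i = q_i, or None."""
    m, n = len(ps[0]), len(qs[0])
    # Unknowns: entries of T, flattened row-major (n*m variables).
    A_eq, b_eq = [], []
    for p, q in zip(ps, qs):          # constraints T @ p = q
        for i in range(n):
            row = np.zeros(n * m)
            row[i * m:(i + 1) * m] = p
            A_eq.append(row); b_eq.append(q[i])
    for j in range(m):                # each column of T sums to 1
        row = np.zeros(n * m)
        row[j::m] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    res = linprog(np.zeros(n * m), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * (n * m), method="highs")
    return res.x.reshape(n, m) if res.success else None

p1, p2 = np.array([0.5, 0.5]), np.array([0.9, 0.1])
q1, q2 = np.array([0.5, 0.5]), np.array([0.7, 0.3])
print(matrix_majorizes([p1, p2], [q1, q2]))  # here T = [[.75,.25],[.25,.75]]
```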

Accent conversion aims to convert the accent of a source speech to a target accent while preserving the speaker's identity. This paper introduces a novel non-autoregressive framework for accent conversion that learns accent-agnostic linguistic representations and employs them to convert the accent of the source speech. Specifically, the proposed system aligns speech representations with linguistic representations obtained from Text-to-Speech (TTS) systems, enabling training of the accent conversion model on non-parallel data. Furthermore, we investigate the effectiveness of a pretraining strategy on native data and of different acoustic features within our proposed framework. We conduct a comprehensive evaluation using both subjective and objective metrics. The results highlight the benefits of the pretraining strategy and of incorporating richer semantic features, which yield significantly enhanced audio quality and intelligibility.
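
The following is a heavily hedged sketch of the alignment idea as we read it: pull frame-level speech-encoder outputs toward linguistic representations from a TTS text encoder, here after naive linear interpolation to the speech length. The shapes, the interpolation, and the squared-error loss are all illustrative assumptions; the paper's actual alignment mechanism may differ.

```python
# Illustrative alignment loss between speech-encoder frames and TTS
# linguistic representations (all shapes and names are assumptions).
import numpy as np

def stretch(linguistic, n_frames):
    """Linearly interpolate a (T_text, D) sequence to (n_frames, D)."""
    t_src = np.linspace(0.0, 1.0, len(linguistic))
    t_tgt = np.linspace(0.0, 1.0, n_frames)
    return np.stack([np.interp(t_tgt, t_src, linguistic[:, d])
                     for d in range(linguistic.shape[1])], axis=1)

def alignment_loss(speech_repr, linguistic_repr):
    """Mean squared distance between speech frames and stretched linguistic
    targets; minimizing it pushes the speech encoder to be accent-agnostic
    in this toy setting."""
    target = stretch(linguistic_repr, len(speech_repr))
    return float(np.mean((speech_repr - target) ** 2))

speech = np.random.randn(120, 256)  # e.g. 120 frames of 256-dim encoder output
text = np.random.randn(30, 256)     # e.g. 30 phoneme-level TTS representations
print(alignment_loss(speech, text))
```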

MDS codes have diverse practical applications in communication systems, data storage, and quantum codes due to their algebraic properties and optimal error-correcting capability. In this paper, we focus on a class of linear codes and establish necessary and sufficient conditions for them to be MDS. Notably, these codes differ from Reed-Solomon codes up to monomial equivalence. We also explore the cases in which these codes are almost MDS or near MDS. Applying our main results, we determine the covering radii and deep holes of the dual codes associated with specific Roth-Lempel codes and discover an infinite family of (almost) optimally extendable codes of dimension three.
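
For readers less familiar with the MDS criterion, a $k \times n$ generator matrix over GF($p$) generates an MDS code exactly when every $k \times k$ submatrix is invertible. The brute-force check below illustrates this textbook test on a small Reed-Solomon code; it is not the paper's monomial-equivalence analysis.

```python
# Textbook MDS test: every k x k minor of the generator matrix over GF(p)
# must be nonzero. Checked by brute force for a small Reed-Solomon code.
from itertools import combinations

def det_mod_p(M, p):
    """Determinant of a square matrix over GF(p) via Gaussian elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] % p), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)        # inverse via Fermat's little theorem
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            M[r] = [(M[r][j] - f * M[c][j]) % p for j in range(n)]
    return det % p

def is_mds(G, p):
    k, n = len(G), len(G[0])
    return all(det_mod_p([[G[i][j] for j in cols] for i in range(k)], p)
               for cols in combinations(range(n), k))

# Reed-Solomon generator: rows are monomials x^i evaluated at distinct points.
p, points, k = 7, [0, 1, 2, 3, 4, 5], 3
G = [[pow(a, i, p) for a in points] for i in range(k)]
print(is_mds(G, p))  # True: Reed-Solomon codes are MDS
```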

Multiple-conclusion Hilbert-style systems allow us to finitely axiomatize every logic defined by a finite matrix. Having obtained such axiomatizations for the Paraconsistent Weak Kleene and Bochvar-Kleene logics, we modify them by replacing the multiple-conclusion rules with carefully selected single-conclusion ones. In this way we obtain the first finite single-conclusion Hilbert-style axiomatizations for these logics.
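
For context, here is a sketch of the finite matrices these logics are defined by, as standardly presented: weak Kleene connectives are "infectious" (any input equal to the third value $e$ yields $e$), Paraconsistent Weak Kleene designates $\{1, e\}$, and the Bochvar-style logic designates $\{1\}$. These tables are stated from the standard literature, not taken from the paper itself.

```python
# Weak Kleene matrices: the third value e is infectious, and the two
# logics differ only in which values count as designated.
from itertools import product

E = "e"  # the infectious third value

def neg(a):     return E if a == E else 1 - a
def conj(a, b): return E if E in (a, b) else min(a, b)
def disj(a, b): return E if E in (a, b) else max(a, b)

def valid(formula, designated, n_vars):
    """A formula (as a function on truth values) is valid iff it takes a
    designated value under every assignment."""
    return all(formula(*v) in designated
               for v in product([0, E, 1], repeat=n_vars))

lem = lambda a: disj(a, neg(a))  # excluded middle: a v ~a
print(valid(lem, {1, E}, 1))     # True in PWK: e v ~e = e is designated
print(valid(lem, {1}, 1))        # False in the Bochvar-style matrix
```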

Large language models are known for encoding a vast amount of factual knowledge, but this knowledge often becomes outdated due to the ever-changing nature of external information. A promising solution to this challenge is to use model editing methods to update the knowledge efficiently. However, the majority of existing model editing techniques are limited to monolingual frameworks and thus fail to address the crucial issue of cross-lingual knowledge synchronization for multilingual models. To tackle this problem, we propose a simple yet effective method that trains multilingual patch neurons to store cross-lingual knowledge. It can be easily adapted to existing approaches to enhance their cross-lingual editing capabilities. To evaluate our method, we conduct experiments on both the XNLI dataset and a self-constructed XFEVER dataset. Experimental results demonstrate that our proposed method achieves improved performance on cross-lingual editing tasks without requiring excessive modifications to the original methodology, showcasing its user-friendly characteristics. Code will be released soon.
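
A heavily hedged sketch of what a patch-neuron layer might look like: a small trainable bottleneck added on top of a frozen feed-forward block, so that only the patch parameters are updated on the edit data. The placement, sizes, and training objective below are illustrative guesses, not the paper's specification.

```python
# Illustrative patch-neuron layer: the pretrained FFN stays frozen and a
# small trainable bottleneck is summed into its output.
import torch
import torch.nn as nn

class PatchedFFN(nn.Module):
    def __init__(self, ffn: nn.Module, d_model: int, n_patch: int = 8):
        super().__init__()
        self.ffn = ffn
        for p in self.ffn.parameters():   # original knowledge stays frozen
            p.requires_grad_(False)
        # "Patch neurons": trained only on edit data and shared across
        # languages so the edit can transfer cross-lingually (assumption).
        self.patch = nn.Sequential(
            nn.Linear(d_model, n_patch), nn.ReLU(), nn.Linear(n_patch, d_model))

    def forward(self, x):
        return self.ffn(x) + self.patch(x)

d_model = 64
base_ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                         nn.Linear(4 * d_model, d_model))
layer = PatchedFFN(base_ffn, d_model)
x = torch.randn(2, 10, d_model)
print(layer(x).shape)  # torch.Size([2, 10, 64]); only patch params train
```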

Machine learning methods based on AdaBoost have been widely applied to various classification problems across many mission-critical applications including healthcare, law and finance. However, there is a growing concern about the unfairness and discrimination of data-driven classification models, which is inevitable for classical algorithms including AdaBoost. In order to achieve fair classification, a novel fair AdaBoost (FAB) approach is proposed as an interpretable fairness-improving variant of AdaBoost. We mainly investigate binary classification problems and focus on the fairness of three indicators (i.e., accuracy, false positive rate and false negative rate). By utilizing a fairness-aware reweighting technique for base classifiers, the proposed FAB approach achieves fair classification while maintaining the advantages of AdaBoost with negligible sacrifice of predictive performance. In addition, a hyperparameter is introduced in FAB to express preferences in the fairness-accuracy trade-off. An upper bound for the target loss function that quantifies error rate and unfairness is theoretically derived for FAB, which provides strict theoretical support for the fairness-improving methods designed for AdaBoost. The effectiveness of the proposed method is demonstrated on three real-world datasets (i.e., Adult, COMPAS and HSLS) with respect to the three fairness indicators. The results accord with the theoretical analyses and show that (i) FAB significantly improves classification fairness at a small cost of accuracy compared with AdaBoost; and (ii) FAB outperforms state-of-the-art fair classification methods including the equalized odds method, the exponentiated gradient method, and the disparate mistreatment method in terms of the fairness-accuracy trade-off.
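
To illustrate the mechanism (not the paper's exact update), the sketch below inserts a fairness-aware factor into the standard AdaBoost reweighting loop: after the usual exponential update, samples in the group with the larger weighted error are upweighted. The factor `1 + lam * gap` is a hypothetical stand-in for FAB's derived weights, and `groups` is a binary sensitive attribute.

```python
# Schematic fairness-aware reweighting inside AdaBoost; the fairness
# factor below is an illustrative stand-in, not FAB's derived update.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fab_like_fit(X, y, groups, T=50, lam=0.5):
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # Standard AdaBoost exponential update ...
        w *= np.exp(-alpha * np.where(pred == y, 1.0, -1.0))
        # ... times a fairness factor: upweight the group with the larger
        # weighted error so later stumps attend to it (illustrative choice).
        g_err = [w[(groups == g) & (pred != y)].sum() /
                 max(w[groups == g].sum(), 1e-10) for g in (0, 1)]
        worse = int(g_err[1] > g_err[0])
        w *= np.where(groups == worse, 1.0 + lam * abs(g_err[1] - g_err[0]), 1.0)
        w /= w.sum()
        stumps.append(stump); alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    score = sum(a * np.where(s.predict(X) == 1, 1.0, -1.0)
                for s, a in zip(stumps, alphas))
    return (score > 0).astype(int)

X = np.random.randn(200, 3)
groups = (np.random.rand(200) > 0.5).astype(int)
y = (X[:, 0] + 0.3 * groups > 0).astype(int)
stumps, alphas = fab_like_fit(X, y, groups)
print(predict(stumps, alphas, X[:5]))
```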

The proximal gradient method is a generic technique for tackling non-smoothness in optimization problems in which the objective function is the sum of a differentiable convex part and a non-differentiable regularization term. Such problems in tensor format are of interest in many fields of applied mathematics, such as image and video processing. Our goal in this paper is to solve such problems with a more general form of the regularization term, and we introduce an adapted iterative proximal gradient method for this purpose. Since the resulting algorithm converges slowly, we use new tensor extrapolation methods to accelerate it. Numerical experiments on color image deblurring illustrate the efficiency of our approach.
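
For the vector (non-tensor) model problem $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$, the scheme looks as follows; the Nesterov-style extrapolation step is a simpler stand-in for the tensor extrapolation methods the paper develops.

```python
# Proximal gradient with extrapolation on min 0.5*||Ax - b||^2 + lam*||x||_1;
# the prox of the l1 term is soft-thresholding.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_prox_grad(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # extrapolation step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.round(accelerated_prox_grad(A, b, lam=0.1)[:8], 2))  # ~sparse recovery
```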

Zero-knowledge proofs (zk-Proofs) are communication protocols by which a prover can demonstrate to a verifier that it possesses a solution to a given public problem without revealing the content of the solution. Arbitrary computations can be transformed into interactive zk-Proofs, so that anyone can be convinced that a computation was executed correctly without learning the data it was executed on, which has significant implications for digital currency. Interactive proofs, however, are not well suited for blockchain applications, but novel protocols such as zk-SNARKs have made zero-knowledge ledgers like Zcash possible. This project builds upon Wolfram's ZeroKnowledgeProofs paclet and implements a zk-SNARK compiler based on the Pinocchio protocol.
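
As a self-contained illustration of the prove-without-revealing pattern (far simpler than zk-SNARKs or the Pinocchio protocol), here is one round of Schnorr's interactive proof of knowledge of a discrete logarithm, with toy parameters.

```python
# One round of Schnorr's interactive zero-knowledge identification protocol.
import secrets

# Toy public parameters: g = 2 generates a subgroup of prime order q = 11
# in Z_23^*; real deployments use far larger groups.
p, q, g = 23, 11, 2

x = 7                # prover's secret solution
y = pow(g, x, p)     # public statement: "I know x such that g^x = y mod p"

r = secrets.randbelow(q)   # prover picks a random nonce ...
t = pow(g, r, p)           # ... and sends the commitment t = g^r
c = secrets.randbelow(q)   # verifier replies with a random challenge
s = (r + c * x) % q        # prover answers; s alone leaks nothing about x

# Verifier's check: g^s == t * y^c (mod p). A prover without x can pass
# only by guessing the challenge, i.e. with probability 1/q per round.
assert pow(g, s, p) == t * pow(y, c, p) % p
print("accepted: statement verified without revealing x")
```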

Human trajectory forecasting is a critical challenge in fields such as robotics and autonomous driving. Due to the inherent uncertainty of human actions and intentions in real-world scenarios, various unexpected occurrences may arise. To uncover latent motion patterns in human behavior, we introduce a novel memory-based method named the Motion Pattern Priors Memory Network. Our method constructs a memory bank derived from clustered prior knowledge of motion patterns observed in the training-set trajectories. We introduce an addressing mechanism that retrieves the matched pattern and the potential target distributions for each prediction from the memory bank, enabling the identification and retrieval of natural motion patterns exhibited by agents; the retrieved target-prior memory token then guides the diffusion model to generate predictions. Extensive experiments validate the effectiveness of our approach, which achieves state-of-the-art trajectory prediction accuracy. The code will be made publicly available.
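
A schematic sketch of the memory-bank addressing idea: cluster motion features from the training trajectories, store the centroids as memory entries, and at inference retrieve the entry closest to the observed motion to condition the generator. The feature choice, the use of k-means, and all dimensions are illustrative assumptions; the diffusion model itself is omitted.

```python
# Memory bank built by clustering motion-pattern features; addressing is
# nearest-centroid retrieval (an illustrative stand-in for the paper's
# mechanism).
import numpy as np
from sklearn.cluster import KMeans

def build_memory(train_feats, n_patterns=32):
    """Memory bank = centroids of clustered motion-pattern features."""
    return KMeans(n_clusters=n_patterns, n_init=10).fit(train_feats).cluster_centers_

def address(memory, query):
    """Retrieve the memory token best matching the observed motion."""
    dists = np.linalg.norm(memory - query, axis=1)
    return memory[np.argmin(dists)]     # prior token handed to the generator

train_feats = np.random.randn(1000, 16)  # e.g. flattened velocity profiles
memory = build_memory(train_feats)
token = address(memory, np.random.randn(16))
print(token.shape)  # (16,): this token would condition the diffusion model
```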

In this contribution we deal with Gaussian quadrature rules based on orthogonal polynomials associated with the weight function $w(x)= x^{\alpha} e^{-x}$ supported on an interval $(0,z)$, $z>0$. The modified Chebyshev algorithm is used to test the accuracy of the computed coefficients of the three-term recurrence relation, the zeros and weights of the quadrature rule, as well as their dependence on the parameter $z$.
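
Once the three-term recurrence coefficients $a_k, b_k$ are available, nodes and weights follow from the Golub-Welsch eigenvalue step sketched below. Producing $a_k, b_k$ for the truncated weight on $(0,z)$ is the delicate part studied in the paper (via the modified Chebyshev algorithm); for this runnable demo we substitute the known coefficients of the untruncated Laguerre case.

```python
# Golub-Welsch step: nodes are eigenvalues of the symmetric tridiagonal
# Jacobi matrix; weights come from the first eigenvector components.
import numpy as np
from math import gamma

def golub_welsch(a, b, mu0):
    """a: diagonal recurrence coefficients; b: off-diagonal ones (b_k > 0);
    mu0: zeroth moment of the weight. Returns quadrature nodes, weights."""
    J = np.diag(a) + np.diag(np.sqrt(b), 1) + np.diag(np.sqrt(b), -1)
    nodes, V = np.linalg.eigh(J)
    weights = mu0 * V[0, :] ** 2
    return nodes, weights

alpha, n = 0.5, 5
k = np.arange(n)
a = 2 * k + alpha + 1           # Laguerre diagonal coefficients (monic form)
b = k[1:] * (k[1:] + alpha)     # Laguerre off-diagonal coefficients
nodes, weights = golub_welsch(a, b, gamma(alpha + 1))
# Sanity check: the rule integrates f(x) = x exactly against x^alpha e^{-x}
# on (0, inf), so both numbers below should match Gamma(alpha + 2).
print(np.dot(weights, nodes), gamma(alpha + 2))
```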
