
This paper discusses one of the most challenging practical engineering problems in speaker recognition systems: the version control of models and user profiles. A typical speaker recognition system consists of two stages: the enrollment stage, where a profile is generated from user-provided enrollment audio; and the runtime stage, where the voice identity of the runtime audio is compared against the stored profiles. As technology advances, the speaker recognition system needs to be updated for better performance. However, if the stored user profiles are not updated accordingly, version mismatch will result in meaningless recognition results. In this paper, we describe different version control strategies for speaker recognition systems that have been carefully studied at Google over years of engineering practice. These strategies are categorized into three groups according to how they are deployed in the production environment: device-side deployment, server-side deployment, and hybrid deployment. To compare different strategies with quantitative metrics under various network configurations, we present SpeakerVerSim, an easily extensible Python-based simulation framework for different server-side deployment strategies of speaker recognition systems.
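As a minimal illustration of the version-mismatch problem (a hypothetical sketch, not SpeakerVerSim's API), the snippet below tags every stored profile with the version of the model that produced it and refuses to score across versions, forcing a re-enrollment or migration step instead.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Profile:
    """A stored user profile: a speaker embedding plus the version of the model that produced it."""
    embedding: np.ndarray
    model_version: str


def runtime_score(runtime_embedding: np.ndarray, profile: Profile, runtime_model_version: str) -> float:
    """Cosine-score runtime audio against a stored profile, guarding against version mismatch.

    Embeddings produced by different model versions live in different spaces,
    so comparing them would yield meaningless results.
    """
    if profile.model_version != runtime_model_version:
        raise RuntimeError("Version mismatch: profile must be migrated (re-enrolled with the new model).")
    a, b = runtime_embedding, profile.embedding
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```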

Related Content

Speaker Recognition, also known as Voiceprint Recognition (VPR), is a biometric identification technology that uses computers and modern information recognition techniques to automatically verify a speaker's identity from the speaker-specific information contained in speech. The goal of speaker recognition research is to extract speaker-characterizing features from speech and to build effective models and systems that achieve automatic and accurate speaker identification.
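As a concrete sketch of this pipeline (hypothetical function names, not any specific production system), the snippet below enrolls a speaker by averaging and normalizing embeddings of enrollment utterances and verifies a test utterance by thresholding a cosine score; the embedding extractor is a random stand-in for a real speaker-embedding model (e.g. a d-vector network).

```python
import numpy as np


def extract_embedding(audio: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a real speaker-embedding model."""
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    return rng.standard_normal(256)


def enroll(enrollment_audios: list[np.ndarray]) -> np.ndarray:
    """Build a profile by averaging and length-normalizing enrollment embeddings."""
    embs = np.stack([extract_embedding(a) for a in enrollment_audios])
    profile = embs.mean(axis=0)
    return profile / np.linalg.norm(profile)


def verify(test_audio: np.ndarray, profile: np.ndarray, threshold: float = 0.7) -> bool:
    """Accept the claimed identity if the cosine similarity exceeds the threshold."""
    e = extract_embedding(test_audio)
    e = e / np.linalg.norm(e)
    return float(np.dot(e, profile)) >= threshold
```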

In the complex domain of neural information processing, discerning fundamental principles from ancillary details remains a significant challenge. While there is extensive knowledge about the anatomy and physiology of the early visual system, a comprehensive computational theory remains elusive. Can we gain insights into the underlying principles of a biological system by abstracting away from its detailed implementation and focusing on the fundamental problems that the system is designed to solve? Using an abstract model based on minimal yet realistic assumptions, we show how to achieve the early visual system's two ultimate objectives: efficient information transmission and modeling of the probability distribution of sensory inputs. We show that optimizing for information transmission does not yield optimal probability distribution modeling. Using a two-pixel (2D) system and image patches, we illustrate that an efficient representation can be realized via a nonlinear population code driven by two types of biologically plausible loss functions that depend solely on the output. After unsupervised learning, our abstract IPU model bears a remarkable resemblance to biological systems, despite not mimicking many features of real neurons, such as spiking activity. A preliminary comparison with a contemporary deep learning model suggests that the IPU model offers a significant efficiency advantage. Our model provides novel insights into the computational theory of early visual systems as well as a potential new approach to enhancing the efficiency of deep learning models.
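The abstract does not specify the two loss functions; purely to illustrate what an output-only objective for a nonlinear population code can look like on a two-pixel system, the sketch below trains a tiny encoder with an assumed infomax-style loss (log-determinant of the response covariance minus an activity cost). This is an illustrative stand-in, not the IPU model's actual objectives.

```python
import numpy as np

rng = np.random.default_rng(0)


def population_code(x, W):
    """Nonlinear population response of W.shape[1] units to 2D stimuli x (shape [n, 2])."""
    return np.tanh(x @ W)


def output_only_loss(r):
    """Assumed infomax-style objective depending solely on the outputs r:
    spread information (maximize log-det of the response covariance)
    while penalizing overall activity (a crude metabolic cost)."""
    c = np.cov(r, rowvar=False)
    _, logdet = np.linalg.slogdet(c + 1e-3 * np.eye(c.shape[0]))
    return -logdet + 0.1 * np.mean(r ** 2)


# Two-pixel stimuli with strongly correlated intensities, as in natural image patches.
x = rng.multivariate_normal([0, 0], [[1.0, 0.9], [0.9, 1.0]], size=2000)
W = 0.1 * rng.standard_normal((2, 8))

# Crude unsupervised learning of the encoder by finite-difference gradient descent.
eps, lr = 1e-4, 0.05
for step in range(100):
    base = output_only_loss(population_code(x, W))
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (output_only_loss(population_code(x, Wp)) - base) / eps
    W -= lr * grad

print("loss after learning:", output_only_loss(population_code(x, W)))
```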

This paper studies the discretization of a homogenization and dimension reduction model for the elastic deformation of microstructured thin plates proposed by Hornung, Neukamm, and Vel\v{c}i\'c in 2014. In this model, a nonlinear bending energy is based on a homogenized quadratic form acting on the second fundamental form associated with the elastic deformation. Convergence is proven for a multi-affine finite element discretization of the involved three-dimensional microscopic cell problems and a discrete Kirchhoff triangle discretization of the two-dimensional isometry-constrained macroscopic problem. Finally, the convergence properties are numerically verified in selected test cases and qualitatively compared with deformation experiments for microstructured sheets of paper.
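In commonly used notation (the symbols below are chosen for illustration and need not match those of the cited works), the macroscopic problem minimizes a homogenized bending energy over isometric deformations $u:\omega\to\mathbb{R}^3$ of the midplane $\omega$:

\[
  \min_{u}\ \int_{\omega} Q_{\mathrm{hom}}\bigl(\mathrm{II}_{u}(x)\bigr)\,\mathrm{d}x
  \qquad\text{subject to}\qquad (\nabla u)^{\top}\nabla u = I_{2}\ \text{a.e. in } \omega,
\]

where $\mathrm{II}_{u}$ denotes the second fundamental form of the deformed surface and the quadratic form $Q_{\mathrm{hom}}$ is obtained from the three-dimensional microscopic cell problems.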

This paper presents the initial stages of the development, from first principles, of a formal logic to characterise and then explore issues in a broadly defined notion of Veracity.

With the rise of foundation models, a new artificial intelligence paradigm has emerged: instead of training a separate machine learning model for each problem, a general-purpose foundation model is simply prompted to solve it. Such models have been shown to exhibit emergent abilities, solving problems that they were not initially trained on. Studies of the effectiveness of such models are still quite limited. In this work, we extensively study the capabilities of the ChatGPT models, namely GPT-4 and GPT-3.5, on 13 affective computing problems: aspect extraction, aspect polarity classification, opinion extraction, sentiment analysis, sentiment intensity ranking, emotion intensity ranking, suicide tendency detection, toxicity detection, well-being assessment, engagement measurement, personality assessment, sarcasm detection, and subjectivity detection. We introduce a framework to evaluate the ChatGPT models on regression-based problems, such as intensity ranking, by modelling them as pairwise ranking classification. We compare ChatGPT against more traditional NLP methods, such as end-to-end recurrent neural networks and transformers. The results demonstrate the emergent abilities of the ChatGPT models on a wide range of affective computing problems: GPT-3.5, and especially GPT-4, show strong performance on many problems, particularly those related to sentiment, emotions, or toxicity. The ChatGPT models fall short on problems with implicit signals, such as engagement measurement and subjectivity detection.
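As an illustration of evaluating a regression-style task as pairwise ranking classification (a schematic sketch with hypothetical function names, not the paper's exact protocol), the snippet below converts gold intensity scores into pairwise preferences and scores a model judge by its pairwise accuracy.

```python
from itertools import combinations


def pairwise_accuracy(texts, gold_intensities, model_prefers_first):
    """Evaluate a ranking task as pairwise classification.

    model_prefers_first(a, b) should return True if the model judges text `a`
    as more intense than text `b` (e.g. by prompting an LLM with both texts).
    """
    correct, total = 0, 0
    for i, j in combinations(range(len(texts)), 2):
        if gold_intensities[i] == gold_intensities[j]:
            continue                      # skip ties; no gold preference exists
        gold_first = gold_intensities[i] > gold_intensities[j]
        if model_prefers_first(texts[i], texts[j]) == gold_first:
            correct += 1
        total += 1
    return correct / total if total else float("nan")


# Usage with a toy stand-in for an LLM judge (here: longer text = more intense).
texts = ["meh", "this is quite bad", "absolutely furious right now"]
gold = [0.1, 0.5, 0.9]
print(pairwise_accuracy(texts, gold, lambda a, b: len(a) > len(b)))
```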

Performing bimanual tasks with dual-arm robotic setups can drastically increase their impact in industrial and daily-life applications. However, performing a bimanual task brings many challenges, such as the synchronization and coordination of the single-arm policies. This article proposes the Safe, Interactive Movement Primitives Learning (SIMPLe) algorithm to teach and correct single- or dual-arm impedance policies directly from human kinesthetic demonstrations. Moreover, it proposes a novel graph encoding of the policy based on Gaussian Process Regression (GPR), in which the single-arm motion is guaranteed to converge close to the demonstrated trajectory and then towards the demonstrated goal. Regulating the robot stiffness according to the epistemic uncertainty of the policy allows the motion to be easily reshaped with human feedback and/or adapted to external perturbations. We tested the SIMPLe algorithm on a real dual-arm setup where the teacher gave separate single-arm demonstrations and then successfully synchronized them using only kinesthetic feedback, or where the original bimanual demonstration was locally reshaped to pick a box at a different height.
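As a toy illustration of the uncertainty-dependent stiffness idea (not the SIMPLe implementation or its graph encoding), the sketch below fits a GPR to a demonstrated trajectory with scikit-learn and lowers the commanded stiffness where the predictive standard deviation, a proxy for epistemic uncertainty, is high; the mapping from uncertainty to stiffness is an assumed choice.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Demonstrated trajectory: desired x-position over time (stand-in for a kinesthetic demo).
t_demo = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
x_demo = np.sin(2 * np.pi * t_demo).ravel()

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4).fit(t_demo, x_demo)


def impedance_command(t, x_current, k_max=500.0, k_min=50.0):
    """Return the attractor, stiffness, and spring force at time t.

    The stiffness is scaled down where the GPR's predictive std (epistemic
    uncertainty) is high, so the robot is compliant away from the demonstration.
    """
    mean, std = gpr.predict(np.array([[t]]), return_std=True)
    confidence = np.exp(-std[0] / 0.05)             # assumed mapping from std to [0, 1]
    stiffness = k_min + (k_max - k_min) * confidence
    force = stiffness * (mean[0] - x_current)       # simple proportional (spring) term
    return mean[0], stiffness, force


print(impedance_command(0.25, x_current=0.8))       # inside the demo: high stiffness
print(impedance_command(2.0, x_current=0.8))        # far outside the demo: low stiffness
```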

This paper studies how to use relation algebras, which are useful for high-level specification and verification, for proving the correctness of lower-level array-based implementations of algorithms. We give a simple relation-algebraic semantics of read and write operations on associative arrays. The array operations seamlessly integrate with assignments in computation models supporting while-programs. As a result, relation algebras can be used for verifying programs with associative arrays. We verify the correctness of an array-based implementation of disjoint-set forests using the union-by-rank strategy and find operations with path compression, path splitting and path halving. All results are formally proved in Isabelle/HOL. This paper is an extended version of [1].
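For readers unfamiliar with the verified data structure, a plain (unverified) Python version of the array-based disjoint-set forest with union-by-rank and path halving is sketched below (path compression and path splitting are analogous variants of find); the paper's contribution lies in proving such operations correct relation-algebraically in Isabelle/HOL, not in the algorithm itself.

```python
class DisjointSetForest:
    """Array-based disjoint-set forest with union-by-rank and path halving."""

    def __init__(self, n: int):
        self.parent = list(range(n))   # parent[i] == i  <=>  i is a root
        self.rank = [0] * n

    def find(self, x: int) -> int:
        # Path halving: make every other node on the path point to its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x: int, y: int) -> None:
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```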

This paper surveys research works in the quickly advancing field of instruction tuning (IT), a crucial technique to enhance the capabilities and controllability of large language models (LLMs). Instruction tuning refers to the process of further training LLMs on a dataset consisting of \textsc{(instruction, output)} pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. In this work, we make a systematic review of the literature, including the general methodology of IT, the construction of IT datasets, the training of IT models, and applications to different modalities, domains, and use cases, along with an analysis of the aspects that influence the outcome of IT (e.g., generation of instruction outputs, size of the instruction dataset). We also review the potential pitfalls of IT and the criticism against it, point out current deficiencies of existing strategies, and suggest some avenues for fruitful research.
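To make the training setup concrete, the sketch below shows one standard way an (instruction, output) pair is rendered into a single token sequence with the loss restricted to the output tokens; the prompt template, the toy tokenizer, and the IGNORE_INDEX convention (used by common training frameworks) are illustrative assumptions, not prescriptions from the surveyed works.

```python
IGNORE_INDEX = -100   # label value conventionally skipped by the cross-entropy loss


def build_example(instruction: str, output: str, tokenize):
    """Render an (instruction, output) pair into input ids and loss labels.

    Labels for the prompt portion are masked out so the supervised loss is
    computed only on the output tokens the model should learn to produce.
    """
    prompt_ids = tokenize(f"### Instruction:\n{instruction}\n### Response:\n")
    output_ids = tokenize(output)
    input_ids = prompt_ids + output_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + output_ids
    return input_ids, labels


# Toy whitespace "tokenizer" just for illustration.
vocab = {}
tokenize = lambda s: [vocab.setdefault(w, len(vocab)) for w in s.split()]

ids, labels = build_example("Summarize: LLMs are large.", "LLMs are large language models.", tokenize)
print(ids)
print(labels)   # prompt positions are -100, output positions carry real token ids
```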

This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades, especially in relation to data-to-text generation and text-to-text generation deep learning methods, as well as new applications of NLG technology. This survey aims to (a) give the latest synthesis of deep learning research on the NLG core tasks, as well as the architectures adopted in the field; (b) detail meticulously and comprehensively the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; and (c) highlight future research directions and relatively recent issues that arise from the increasing synergy between NLG and other artificial intelligence areas, such as computer vision, text, and computational creativity.

This paper serves as a survey of recent advances in large margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs), which are arguably the most prominent machine learning models for large-scale data in the community over the past decade. We generalize the formulation of classification margins from classical research to the latest DNNs, summarize theoretical connections between the margin, network generalization, and robustness, and comprehensively introduce recent efforts in enlarging the margins for DNNs. Since different methods take different viewpoints, we categorize them into groups for ease of comparison and discussion. We hope that our discussion and overview inspire new research in the community aimed at improving the performance of DNNs, and we also point to directions in which the large margin principle can be verified to provide theoretical evidence for why certain regularizations of DNNs work well in practice. We have kept the paper concise so that the crucial spirit of large margin learning and related methods is better emphasized.
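For reference, the classical multi-class classification margin of a sample $(x, y)$ under a model with per-class scores $f_j$ is commonly written as (notation assumed here for illustration)

\[
  \gamma(x, y) \;=\; f_{y}(x) \;-\; \max_{j \neq y} f_{j}(x),
\]

and for DNNs one also studies the input-space margin, i.e. the distance from $x$ to the decision boundary, $\min_{\delta}\{\|\delta\| : \arg\max_{j} f_{j}(x + \delta) \neq y\}$; quantities of this kind underlie the connections between margins, generalization, and robustness discussed above.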

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators such that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, on simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
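The abstract does not spell out the proposed estimators; purely as a generic illustration of why adjusting for confounding matters, the sketch below contrasts a naive difference-in-means estimate with an inverse-propensity-weighted (IPW) estimate on simulated lending data where the credit decision is confounded with repayment ability. IPW is a standard textbook adjustment and is not claimed to be the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

credit_score = rng.normal(size=n)                                 # confounder
treat = (rng.normal(size=n) + credit_score > 0).astype(int)       # decision depends on the score
repay = 0.2 * treat + 1.0 * credit_score + rng.normal(scale=0.5, size=n)  # true effect = 0.2

# Naive estimator: ignores confounding and is badly biased upward here.
naive = repay[treat == 1].mean() - repay[treat == 0].mean()

# IPW estimator: reweight outcomes by the estimated propensity of the credit decision.
p = LogisticRegression().fit(credit_score.reshape(-1, 1), treat).predict_proba(
    credit_score.reshape(-1, 1))[:, 1]
p = np.clip(p, 0.05, 0.95)   # clip extreme propensities to stabilize the weights
ipw = np.mean(treat * repay / p) - np.mean((1 - treat) * repay / (1 - p))

print(f"true effect 0.20 | naive {naive:.2f} | IPW {ipw:.2f}")
```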
