
This paper investigates the use of word surprisal, a measure of the predictability of a word in a given context, as a feature to aid speech synthesis prosody. We explore how word surprisal extracted from large language models (LLMs) correlates with word prominence, a signal-based measure of the salience of a word in a given discourse. We also examine how context length and LLM size affect the results, and how a speech synthesizer conditioned on surprisal values compares with a baseline system. To evaluate these factors, we conducted experiments using a large corpus of English text and LLMs of varying sizes. Our results show that word surprisal and word prominence are moderately correlated, suggesting that they capture related but distinct aspects of language use. We find that context length and LLM size affect the correlations, but not in the anticipated direction: longer contexts and larger LLMs generally underpredict prominent words in a nearly linear manner. In line with these findings, a speech synthesizer conditioned on surprisal values provides only a minimal improvement over the baseline, suggesting a limited effect of surprisal values on eliciting appropriate prominence patterns.
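For concreteness, the sketch below shows one common way to extract word-level surprisal from a causal LLM using the Hugging Face transformers library. The model choice (GPT-2) and the rule of summing sub-token surprisals per word are illustrative assumptions, not necessarily the setup used in the paper.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def word_surprisal(words):
    """Surprisal (in bits) of each word given all preceding words."""
    text = tokenizer.bos_token + " " + " ".join(words)
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    # surprisal of token t given tokens < t, converted from nats to bits
    token_surprisal = [
        -log_probs[0, t - 1, ids[0, t]].item() / math.log(2)
        for t in range(1, ids.shape[1])
    ]
    # map sub-tokens back to words: GPT-2 marks word starts with a leading space marker
    tokens = tokenizer.convert_ids_to_tokens(ids[0].tolist())[1:]  # skip BOS
    out, widx = [0.0] * len(words), -1
    for tok, s in zip(tokens, token_surprisal):
        if tok.startswith("Ġ"):
            widx += 1
        out[widx] += s
    return out

print(word_surprisal("the cat sat on the mat".split()))
```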

Related content

This paper presents an algorithm for finding the optimal configuration of an active reconfigurable intelligent surface (RIS) when both the transmitter and the receiver are equipped with a single antenna. The resulting configuration is globally optimal and can be computed in linear time. Moreover, a closed-form expression exists for the optimal configuration when the direct link vanishes, which enables further analysis.
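As background, when the direct link vanishes, the classical co-phasing configuration makes every cascaded path add coherently; a minimal numpy sketch is shown below. This is not the paper's algorithm: the closed form for the active-RIS case may additionally allocate amplification gains, which this sketch does not model.

```python
import numpy as np

# Background sketch (not the paper's algorithm): single-antenna Tx and Rx,
# no direct link, unit-modulus elements co-phased so paths h_n * g_n align.
rng = np.random.default_rng(0)
N = 64
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # Tx -> RIS
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # RIS -> Rx

theta = np.exp(-1j * np.angle(h * g))         # unit-modulus co-phasing
effective = np.sum(g * theta * h)             # resulting end-to-end channel
print(abs(effective), np.sum(np.abs(h * g)))  # equal: all paths add coherently
```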

The primary objective of this paper is to investigate distributed dynamic programming (DP) and distributed temporal difference (TD) learning algorithms for networked multi-agent Markov decision problems (MAMDPs). In our study, we adopt a distributed multi-agent framework where individual agents have access only to their own rewards, lacking insights into the rewards of other agents. Additionally, each agent can share its parameters with neighboring agents through a communication network, represented by a graph. Our contributions can be summarized in two key points: 1) We introduce a novel distributed DP, inspired by the averaging consensus method in the continuous-time domain. The convergence of this DP is analyzed from a control-theoretic perspective. 2) Building upon the aforementioned DP, we devise a new distributed TD-learning algorithm and prove its convergence. A standout feature of our proposed distributed DP is its incorporation of two independent dynamic systems, each with a distinct role. This characteristic sets the stage for a novel distributed TD-learning strategy, the convergence of which can be directly established using the Borkar-Meyn theorem.
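A minimal sketch of a consensus-style distributed TD(0) update of the general kind described above: each agent mixes its neighbors' parameters through a doubly stochastic weight matrix and then applies its own local TD correction using only its private reward. The environment, features, and mixing weights are illustrative; the paper's algorithm, with its two coupled dynamic systems, differs in detail.

```python
import numpy as np

# Illustrative consensus-style distributed TD(0) (not the paper's exact algorithm):
# N agents evaluate a shared policy on a common MDP, each observing only its own
# reward; parameters are averaged over a communication graph at every step.
rng = np.random.default_rng(1)
n_states, dim, n_agents = 5, 3, 4
Phi = rng.standard_normal((n_states, dim))           # feature map phi(s)
P = rng.dirichlet(np.ones(n_states), size=n_states)  # policy-induced transitions
R = rng.standard_normal((n_agents, n_states))        # private per-agent rewards
gamma, alpha = 0.9, 0.05

# Doubly stochastic mixing matrix for a ring graph of 4 agents.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

theta = np.zeros((n_agents, dim))
s = 0
for t in range(20000):
    s_next = rng.choice(n_states, p=P[s])
    mixed = W @ theta                                  # consensus averaging step
    for i in range(n_agents):
        delta = R[i, s] + gamma * Phi[s_next] @ theta[i] - Phi[s] @ theta[i]
        theta[i] = mixed[i] + alpha * delta * Phi[s]   # local TD(0) correction
    s = s_next

# Agents' parameters tend toward a common value tied to the average of the rewards.
print(np.round(theta, 3))
```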

In the last decades, the necessity to process massive amounts of textual data fueled the development of compressed text indexes: data structures efficiently answering queries on a given text while occupying space proportional to the compressed representation of the text. A widespread phenomenon in compressed indexing is that more powerful queries require larger indexes. For example, random access, the most basic query, can be supported in $O(\delta\log\frac{n\log\sigma}{\delta\log n})$ space (where $n$ is the text length, $\sigma$ is the alphabet size, and $\delta$ is text's substring complexity), which is the asymptotically smallest space to represent a string, for all $n$, $\sigma$, and $\delta$ (Kociumaka, Navarro, Prezza; IEEE Trans. Inf. Theory 2023). The other end of the hierarchy is occupied by indexes supporting the powerful suffix array (SA) queries. The currently smallest one takes $O(r\log\frac{n}{r})$ space, where $r\geq\delta$ is the number of runs in the BWT of the text (Gagie, Navarro, Prezza; J. ACM 2020). We present a new compressed index that needs only $O(\delta\log\frac{n\log\sigma}{\delta\log n})$ space to support SA functionality in $O(\log^{4+\epsilon} n)$ time. This collapses the hierarchy of compressed data structures into a single point: The space required to represent the text is simultaneously sufficient for efficient SA queries. Our result immediately improves the space complexity of dozens of algorithms, which can now be executed in optimal compressed space. In addition, we show how to construct our index in $O(\delta\text{ polylog } n)$ time from the LZ77 parsing of the text. For highly repetitive texts, this is up to exponentially faster than the previously best algorithm. To obtain our results, we develop numerous techniques of independent interest, including the first $O(\delta\log\frac{n\log\sigma}{\delta\log n})$-size index for LCE queries.
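For intuition about the space bound, the substring complexity $\delta$ appearing above is defined as $\max_{k\geq 1} S(k)/k$, where $S(k)$ is the number of distinct length-$k$ substrings of the text. A brute-force (quadratic-time, illustration-only) computation of this measure is sketched below; practical computation would use a suffix structure.

```python
def substring_complexity(text: str) -> float:
    """delta = max over k of (number of distinct length-k substrings) / k.
    Brute-force O(n^2) illustration of the repetitiveness measure delta."""
    n = len(text)
    best = 0.0
    for k in range(1, n + 1):
        distinct = len({text[i:i + k] for i in range(n - k + 1)})
        best = max(best, distinct / k)
    return best

# Highly repetitive strings have small delta relative to their length.
print(substring_complexity("abababababababab"))   # low complexity (delta = 2)
print(substring_complexity("abcdefghijklmnop"))   # every character distinct (delta = 16)
```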

The focus of this paper is on the concurrent reconstruction of both the diffusion and potential coefficients present in an elliptic/parabolic equation, utilizing two internal measurements of the solutions. A decoupled algorithm is constructed to sequentially recover these two parameters. In the first step, we implement a straightforward reformulation that results in a standard problem of identifying the diffusion coefficient. This coefficient is then numerically recovered, with no requirement for knowledge of the potential, by utilizing an output least-squares method coupled with finite element discretization. In the second step, the previously recovered diffusion coefficient is employed to reconstruct the potential coefficient, applying a method similar to the first step. Our approach is motivated by a constructive conditional stability, and we provide rigorous a priori error estimates in $L^2(\Omega)$ for the recovered diffusion and potential coefficients. To derive these estimates, we develop a weighted energy argument and suitable positivity conditions. These estimates offer a beneficial guide for choosing regularization parameters and discretization mesh sizes in accordance with the noise level. Some numerical experiments are presented to demonstrate the accuracy of the numerical scheme and support our theoretical results.
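To make the first step concrete, the following is a small one-dimensional illustration of the output least-squares idea: a finite-difference forward solver for $-(q u')' = f$ and a regularized misfit minimized over the diffusion coefficient $q$, given one noisy internal measurement of $u$. The discretization, Tikhonov term, and optimizer are assumptions for illustration; the paper's finite element formulation, weighting, and stability analysis are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal 1D output least-squares sketch: recover q(x) in -(q u')' = f on (0,1),
# u(0) = u(1) = 0, from one noisy internal measurement u_obs.
n = 50
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = np.ones(n - 1)                      # constant source at interior nodes
rng = np.random.default_rng(0)

def solve_forward(q):
    """Finite-difference solve of -(q u')' = f with homogeneous Dirichlet BC."""
    q_half = 0.5 * (q[:-1] + q[1:])     # q at midpoints x_{i+1/2}
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = (q_half[i] + q_half[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -q_half[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -q_half[i + 1] / h**2
    u_inner = np.linalg.solve(A, f)
    return np.concatenate(([0.0], u_inner, [0.0]))

# Synthetic measurement from a "true" coefficient, with small noise.
q_true = 1.0 + 0.5 * np.sin(np.pi * x)
u_obs = solve_forward(q_true) + 1e-4 * rng.standard_normal(n + 1)

def objective(q):
    misfit = solve_forward(q) - u_obs
    reg = 1e-6 * np.sum(np.diff(q)**2) / h          # H^1-type Tikhonov term
    return 0.5 * h * np.sum(misfit**2) + reg

res = minimize(objective, np.ones(n + 1), method="L-BFGS-B",
               bounds=[(0.1, 10.0)] * (n + 1))
print("relative L2 error:", np.linalg.norm(res.x - q_true) / np.linalg.norm(q_true))
```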

This paper proposes a computationally efficient framework, based on interval analysis, for rigorous verification of nonlinear continuous-time dynamical systems with neural network controllers. Given a neural network, we use an existing verification algorithm to construct inclusion functions for its input-output behavior. Inspired by mixed monotone theory, we embed the closed-loop dynamics into a larger system using an inclusion function of the neural network and a decomposition function of the open-loop system. This embedding provides a scalable approach for safety analysis of the neural control loop while preserving the nonlinear structure of the system. We show that one can efficiently compute hyper-rectangular over-approximations of the reachable sets using a single trajectory of the embedding system. We design an algorithm that leverages this computational advantage through partitioning strategies, improving our reachable set estimates while balancing runtime with tunable parameters. We demonstrate the performance of this algorithm through two case studies. First, we demonstrate the method's strength in complex nonlinear environments. Then, we show that our approach matches the performance of the state-of-the-art verification algorithm for linear discretized systems.
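A minimal sketch of one standard way to build an inclusion function for a ReLU network, interval bound propagation, which maps an input box to an output box layer by layer. The verification algorithm actually used in the paper may produce tighter inclusion functions; this is just the simplest instance of the idea.

```python
import numpy as np

def interval_bound_propagation(layers, lower, upper):
    """Propagate an axis-aligned box through a ReLU network.

    `layers` is a list of (W, b) pairs; returns element-wise output bounds.
    This is a simple inclusion function; tighter constructions exist."""
    l, u = np.asarray(lower, float), np.asarray(upper, float)
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_l = W_pos @ l + W_neg @ u + b
        new_u = W_pos @ u + W_neg @ l + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            new_l, new_u = np.maximum(new_l, 0), np.maximum(new_u, 0)
        l, u = new_l, new_u
    return l, u

# Tiny 2-2-1 controller evaluated on the input box [-1, 1] x [-1, 1].
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((2, 2)), np.zeros(2)),
          (rng.standard_normal((1, 2)), np.zeros(1))]
print(interval_bound_propagation(layers, [-1, -1], [1, 1]))
```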

This paper introduces a large collection of time series data derived from Twitter, postprocessed using word embedding techniques, as well as specialized fine-tuned language models. The data span the past five years and capture changes in n-gram frequency, similarity, sentiment, and topic distribution. The interface built on top of this data enables temporal analysis for detecting and characterizing shifts in meaning, including information complementary to trending metrics, such as sentiment and topic association over time. We release an online demo for easy experimentation, and we share the code and the underlying aggregated data for future work. We also discuss three case studies unlocked by our platform, showcasing its potential for temporal linguistic analysis.
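As an example of the kind of temporal analysis such an interface supports, the toy sketch below tracks the semantic shift of a word by comparing its embedding in each time bin to a reference bin via cosine similarity. The data layout and function names are illustrative assumptions, not the released platform's API.

```python
import numpy as np

def semantic_shift(embeddings_by_year, word, reference_year):
    """Cosine similarity of a word's embedding in each year to a reference year.

    `embeddings_by_year` maps year -> {word: vector}; a drop in similarity
    suggests the word's dominant usage has shifted.  Toy illustration only.
    NB: embeddings trained separately per time bin usually need alignment
    (e.g., orthogonal Procrustes) before such comparisons are meaningful."""
    ref = embeddings_by_year[reference_year][word]
    ref = ref / np.linalg.norm(ref)
    shift = {}
    for year, table in sorted(embeddings_by_year.items()):
        v = table[word]
        shift[year] = float(ref @ (v / np.linalg.norm(v)))
    return shift

# Fabricated vectors purely to show the call pattern.
rng = np.random.default_rng(0)
emb = {year: {"stream": rng.standard_normal(8)} for year in range(2018, 2023)}
print(semantic_shift(emb, "stream", 2018))
```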

Many complex engineering systems can be represented in a topological form, such as graphs. This paper utilizes a machine learning technique called Geometric Deep Learning (GDL) to aid designers with challenging, graph-centric design problems. The strategy presented here is to take the graph data and apply GDL to seek the best-performing realizable solution effectively and efficiently, at lower computational cost. The case study used here is the synthesis of analog electrical circuits that attempt to match a specific frequency response within a particular frequency range. Previous studies utilized an enumeration technique to generate 43,249 unique undirected graphs representing valid potential circuits. Unfortunately, determining the sizing and performance of every circuit can be too expensive. To reduce computational cost with a quantified trade-off in accuracy, a fraction of the circuit graphs and their performance are used as input data to a classification-focused GDL model. The GDL model is then used to predict the remainder cheaply, thus aiding decision-makers in the search for the best graph solutions. The results show that additional graph-based features are useful, that a total-set classification accuracy of 80\% can be achieved using only 10\% of the graphs, and that iteratively built GDL models can further subdivide the graphs into targeted groups with medians significantly closer to the best, containing on average 88.2 of the top 100 best-performing graphs while using 25\% of the graphs.
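A minimal sketch of a classification-focused graph model of the kind described, written with PyTorch Geometric: node features pass through graph convolutions, are pooled per graph, and a linear head predicts whether a circuit graph belongs to the high-performing class. The library choice, layer sizes, and node features are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class CircuitGraphClassifier(nn.Module):
    """Toy graph classifier in the spirit of the GDL approach above:
    predicts whether a circuit graph is likely to be among the best performers."""
    def __init__(self, num_node_features, hidden=64, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)      # one vector per graph
        return self.head(h)

# Usage with a single toy graph (3 nodes, 2 undirected edges):
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
batch = torch.zeros(3, dtype=torch.long)
model = CircuitGraphClassifier(num_node_features=4)
print(model(x, edge_index, batch).shape)    # torch.Size([1, 2])
```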

This paper proposes a machine learning-based approach for detecting the exploitation of vulnerabilities in the wild by monitoring underground hacking forums. The increasing volume of posts discussing exploitation in the wild calls for an automatic approach to process threads and posts that will eventually trigger alarms depending on their content. To illustrate the proposed system, we use the CrimeBB dataset, which contains data scraped from multiple underground forums, and develop a supervised machine learning model that can filter threads citing CVEs and label them as Proof-of-Concept, Weaponization, or Exploitation. Leveraging random forests, we show that accuracy, precision, and recall above 0.99 are attainable for the classification task. Additionally, we provide insights into the difference in nature between weaponization and exploitation, e.g., by interpreting the output of a decision tree, and analyze the profits and other aspects related to the hacking communities. Overall, our work sheds light on the exploitation of vulnerabilities in the wild and can be used to provide additional ground truth to models such as EPSS and Expected Exploitability.
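A sketch of what the classification step could look like with scikit-learn: TF-IDF features over post text feeding a random forest that assigns one of the three labels. The example texts, feature set, and hyperparameters are placeholders, not CrimeBB data or the paper's exact feature engineering.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy illustration of the thread classification step (texts and labels are
# placeholders; the paper's features over CrimeBB threads are richer).
texts = [
    "released a PoC for CVE-2021-44228 on github",
    "integrated the exploit into our loader, works on patched hosts too",
    "seeing this CVE exploited in the wild against our honeypots",
] * 10
labels = ["proof-of-concept", "weaponization", "exploitation"] * 10

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=200, random_state=0))
print(cross_val_score(clf, texts, labels, cv=3).mean())
```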

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.

We address the task of automatically scoring the competency of candidates based on textual features extracted from automatic speech recognition (ASR) transcriptions in asynchronous video job interviews (AVIs). The key challenge is how to construct the dependency relation between questions and answers and conduct semantic-level interaction for each question-answer (QA) pair. Most recent studies in AVI focus on how to better represent questions and answers but ignore the dependency information and interaction between them, which is critical for QA evaluation. In this work, we propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs. Specifically, we construct a sentence-level relational graph neural network to capture the dependency information of sentences within and between the question and the answer. Based on these graphs, we employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session. Finally, we propose a gated recurrent unit encoder to represent the temporal sequence of question-answer pairs for the final prediction. Experiments conducted on CHNAT (a real-world dataset) validate that our proposed model significantly outperforms text-matching-based benchmark models. Ablation studies and experimental results with 10 random seeds also show the effectiveness and stability of our models.
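A minimal sketch loosely following the hierarchy described above: sentence nodes of each QA pair interact through a graph attention layer, and a GRU encodes the sequence of QA-pair representations for a final score. Dimensions, pooling, and the scoring head are illustrative assumptions, not the exact HRGNN architecture.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class QAPairEncoder(nn.Module):
    """Graph attention over the sentence nodes of one question-answer pair."""
    def __init__(self, dim=128):
        super().__init__()
        self.gat = GATConv(dim, dim, heads=2, concat=False)

    def forward(self, sent_emb, edge_index):
        # sent_emb: (num_sentences, dim) embeddings of question + answer sentences
        h = torch.relu(self.gat(sent_emb, edge_index))
        return h.mean(dim=0)                 # pooled representation of the QA pair

class InterviewScorer(nn.Module):
    """GRU over the sequence of QA-pair representations, then a scoring head."""
    def __init__(self, dim=128):
        super().__init__()
        self.pair_encoder = QAPairEncoder(dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, qa_pairs):
        # qa_pairs: list of (sent_emb, edge_index), one entry per QA pair
        pair_reprs = torch.stack([self.pair_encoder(s, e) for s, e in qa_pairs])
        _, h_n = self.gru(pair_reprs.unsqueeze(0))
        return self.head(h_n.squeeze(0))     # predicted competency score

# Usage with two toy QA pairs, each with 3 sentence nodes and a small graph.
pairs = [(torch.randn(3, 128), torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]]))
         for _ in range(2)]
print(InterviewScorer()(pairs))
```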
