
To address the challenge of identifying hidden dangers in substations from unstructured text, a novel dynamic analysis method is proposed. We first extract relevant information from the unstructured text and then leverage a flexible distributed search engine built on Elasticsearch to manage the data. Following this, a hidden Markov model is trained on the data held in the engine. The Viterbi algorithm is integrated to decode the hidden state sequences, facilitating the segmentation and labeling of entities related to hidden dangers. The final step uses the Neo4j graph database to dynamically build a knowledge graph that visualizes hidden dangers in the substation. The effectiveness of the proposed method is demonstrated through a case analysis of a specific substation whose hidden dangers are revealed in its text records.
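As a minimal illustration of the decoding step, the sketch below runs the Viterbi algorithm over a toy BIO-style label set for hidden-danger entities; the label inventory and all probabilities are illustrative placeholders, not the trained model's parameters.

```python
import numpy as np

# Toy BIO-style hidden states for hidden-danger entities (illustrative only).
states = ["B-DANGER", "I-DANGER", "O"]

def viterbi(obs_logprob, trans_logprob, init_logprob):
    """Decode the most likely hidden state sequence.

    obs_logprob:   (T, S) log P(observation_t | state_s)
    trans_logprob: (S, S) log P(state_j | state_i)
    init_logprob:  (S,)   log P(state_s at t=0)
    """
    T, S = obs_logprob.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = init_logprob + obs_logprob[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + trans_logprob   # (S, S) candidate scores
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + obs_logprob[t]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# Example with random placeholder log-probabilities for a 5-token record.
rng = np.random.default_rng(0)
print(viterbi(np.log(rng.dirichlet(np.ones(3), size=5)),
              np.log(rng.dirichlet(np.ones(3), size=3)),
              np.log(np.ones(3) / 3)))
```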

Related content

In this paper, the problem of joint transmission and computation resource allocation for a multi-user probabilistic semantic communication (PSC) network is investigated. In the considered model, users employ semantic information extraction techniques to compress their large-sized data before transmitting it to a multi-antenna base station (BS). Our model represents the large-sized data through knowledge graphs, utilizing probability graphs shared between the users and the BS for efficient semantic compression. The resource allocation problem is formulated as an optimization problem whose objective is to maximize the sum of the equivalent rates of all users, under total power budget and semantic resource limit constraints. The computation load in the PSC network is modeled as a non-smooth piecewise function of the semantic compression ratio. To tackle this non-convex, non-smooth optimization challenge, a three-stage algorithm is proposed in which the solutions for the receive beamforming matrix of the BS, the transmit power of each user, and the semantic compression ratio of each user are obtained stage by stage. Numerical results validate the effectiveness of the proposed scheme.
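The three-stage structure can be sketched as a simple alternating loop. The sub-solvers below (an MMSE-style beamforming step, a crude power update, and a grid search over the compression ratio against a piecewise load model) are generic placeholders standing in for the paper's actual derivations; all dimensions, budgets, and the load function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 4, 8                                   # users, BS antennas (assumed)
H = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))   # uplink channels
noise, p_max, rho_min = 1.0, 1.0, 0.2         # placeholder budgets / semantic limit

p = np.full(K, p_max / K)                     # per-user transmit power
rho = np.ones(K)                              # per-user semantic compression ratio

def comp_load(r):
    # Placeholder non-smooth piecewise computation load vs. compression ratio.
    return np.where(r > 0.5, 1.0 - r, 2.0 * (1.0 - r))

for _ in range(20):
    # Stage 1: receive beamforming (MMSE-style placeholder).
    cov = (H * p) @ H.conj().T + noise * np.eye(M)
    W = np.linalg.solve(cov, H)               # (M, K) receive vectors
    # Stage 2: transmit power update (crude placeholder step).
    gain = np.abs(np.sum(W.conj() * H, axis=0)) ** 2
    p = np.minimum(p_max / K, 0.5 / np.maximum(gain, 1e-9) + 0.1)
    # Stage 3: compression ratio via grid search on a rough equivalent-rate proxy.
    grid = np.linspace(rho_min, 1.0, 50)
    for k in range(K):
        proxy = np.log2(1 + gain[k] * p[k] / noise) / grid - 0.1 * comp_load(grid)
        rho[k] = grid[np.argmax(proxy)]
```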

We study how to characterize and predict the truthfulness of texts generated from large language models (LLMs), which serves as a crucial step in building trust between humans and LLMs. Although several approaches based on entropy or verbalized uncertainty have been proposed to calibrate model predictions, these methods are often intractable, sensitive to hyperparameters, and less reliable when applied to generative tasks with LLMs. In this paper, we suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations. Through experiments on four question answering (QA) datasets, we demonstrate the effectiveness of our proposed method. Additionally, we study intrinsic dimensions in LLMs and their relations with model layers, autoregressive language modeling, and the training of LLMs, revealing that intrinsic dimensions can be a powerful approach to understanding LLMs.
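One standard way to estimate local intrinsic dimension is the k-nearest-neighbour maximum-likelihood estimator of Levina and Bickel, sketched below for a matrix of layer activations. Whether this matches the exact estimator, layer choice, and pooling used in the paper is an assumption made here for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lid_mle(activations, query, k=20):
    """Levina-Bickel MLE of the local intrinsic dimension at `query`.

    activations: (N, D) hidden states collected from one LLM layer
    query:       (D,)   activation of the generation being scored
    """
    nn = NearestNeighbors(n_neighbors=k).fit(activations)
    dist, _ = nn.kneighbors(query[None, :])          # (1, k), sorted ascending
    dist = np.maximum(dist[0], 1e-12)
    # LID = [ (1/(k-1)) * sum_i log(T_k / T_i) ]^{-1}
    return (k - 1) / np.sum(np.log(dist[-1] / dist[:-1]))

# Toy usage with random stand-ins for layer activations (illustrative only).
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))
print(lid_mle(acts, rng.normal(size=64)))
```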

The elliptic reconstruction property, originally introduced by Makridakis and Nochetto for linear parabolic problems, is a well-known tool for deriving optimal a posteriori error estimates. No such results are known for nonlinear and nonsmooth problems such as parabolic variational inequalities (VIs). This article establishes the elliptic reconstruction property for parabolic VIs and derives a posteriori error estimates in the $L^{\infty}(0,T;L^{2}(\Omega))$ and $L^{\infty}(0,T;L^{\infty}(\Omega))$ norms. As an application, residual-type error estimates are presented.
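For orientation, the classical splitting behind elliptic reconstruction arguments is recalled below, stated only schematically and for the linear parabolic case; the variational-inequality version established in this article involves additional terms coming from the obstacle constraint.

```latex
% Schematic elliptic reconstruction splitting (linear parabolic case).
% R u_h(t) denotes the elliptic reconstruction of the discrete solution u_h(t):
% the exact solution of the stationary elliptic problem whose data are built from u_h(t).
\[
  u(t) - u_h(t)
  \;=\;
  \underbrace{\bigl(u(t) - R u_h(t)\bigr)}_{\text{parabolic part, bounded by energy/PDE arguments}}
  \;+\;
  \underbrace{\bigl(R u_h(t) - u_h(t)\bigr)}_{\text{elliptic part, bounded by elliptic a posteriori estimators}}
\]
```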

This study proposes a novel method, applicable to various data distributions, for forecasting a scalar variable from high-dimensional predictors. In the literature, one popular approach to forecasting with many predictors is to use factor models. However, these traditional methods are ineffective when the data exhibit non-Gaussian characteristics such as skewness or heavy tails. In this study, we utilize a quantile factor model to extract quantile factors that describe specific quantiles of the data beyond the mean factor. We then build a quantile-based forecast model using the estimated quantile factors at different quantile levels as predictors. Finally, the predicted values at the various quantile levels are combined into a single forecast as a weighted average, with weights determined by a Markov chain based on past trends of the target variable. The main idea of the proposed method is to incorporate a quantile approach into a forecasting method so as to handle non-Gaussian characteristics effectively. The performance of the proposed method is evaluated through a simulation study and a real data analysis of PM2.5 data in South Korea, where the proposed method outperforms existing methods in most cases.
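The combination step can be sketched as follows: per-quantile forecast models built on (here, assumed precomputed) quantile factors are averaged with weights. In this sketch the weights are fixed placeholders rather than the Markov-chain weights based on past trends described above, and the factors are random stand-ins.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
T, r = 200, 3
y = rng.standard_t(df=3, size=T)                  # heavy-tailed target (toy)
quantile_factors = {q: rng.normal(size=(T, r))    # stand-ins for estimated quantile factors
                    for q in (0.1, 0.5, 0.9)}

preds = {}
for q, F in quantile_factors.items():
    model = QuantileRegressor(quantile=q, alpha=0.0, solver="highs")
    model.fit(F[:-1], y[1:])                      # one-step-ahead forecast model at level q
    preds[q] = model.predict(F[-1:])[0]

# Placeholder combination weights; the paper derives them from a Markov chain on past trends.
weights = {0.1: 0.25, 0.5: 0.5, 0.9: 0.25}
forecast = sum(weights[q] * preds[q] for q in preds)
print(forecast)
```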

Online auction scenarios, such as bidding in search auctions on advertising platforms, often require bidders to participate repeatedly in auctions for identical or similar items. Most previous studies have considered only the process by which the seller learns the prior-dependent optimal mechanism in a repeated auction. In this paper, by contrast, we define a multi-agent reinforcement learning environment in which strategic bidders and the seller learn their strategies simultaneously, and we design an automatic bidding algorithm that updates the bidders' strategies through online interactions. We propose Bid Net to replace the linear shading function as the representation of a strategic bidder's strategy, which effectively improves the utility of the strategies learned by bidders. We adapt and revise opponent-modeling methods to design the PG (pseudo-gradient) algorithm, which allows bidders to learn optimal bidding strategies using predictions of the other agents' strategy transitions. We prove that when a bidder uses the PG algorithm, it learns the best response to static opponents. When all bidders adopt the PG algorithm, the system converges to the equilibrium of the game induced by the auction. In experiments with diverse environmental settings and varying opponent strategies, the PG algorithm maximizes the utility of bidders. We hope that this article will inspire research on automatic bidding strategies for strategic bidders.
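A Bid Net in the sense described, a small neural network replacing the linear shading rule b = alpha * v, might look like the PyTorch sketch below. The architecture, sizes, and the toy utility update are assumptions for illustration; the actual PG algorithm with opponent-transition prediction is omitted.

```python
import torch
import torch.nn as nn

class BidNet(nn.Module):
    """Maps a bidder's private value to a bid, replacing linear shading b = alpha * v."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # bids are non-negative
        )

    def forward(self, value):
        return self.net(value)

# Toy update step: maximize estimated utility (value - payment) * win probability.
# The actual PG algorithm additionally predicts opponents' strategy transitions.
bidder = BidNet()
opt = torch.optim.Adam(bidder.parameters(), lr=1e-3)
values = torch.rand(256, 1)
bids = bidder(values)
win_prob = torch.sigmoid(4.0 * (bids - 0.5))       # placeholder opponent model
utility = ((values - bids) * win_prob).mean()
opt.zero_grad(); (-utility).backward(); opt.step()
```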

The Knapsack problem is one of the most fundamental NP-complete problems at the intersection of computer science, optimization, and operations research. A recent line of research has worked towards understanding the complexity of pseudopolynomial-time algorithms for Knapsack parameterized by the maximum item weight $w_{\mathrm{max}}$ and the number of items $n$. A conditional lower bound rules out that Knapsack can be solved in time $O((n+w_{\mathrm{max}})^{2-\delta})$ for any $\delta > 0$ [Cygan, Mucha, Wegrzycki, Wlodarczyk'17, K\"unnemann, Paturi, Schneider'17]. This raised the question of whether Knapsack can be solved in time $\tilde O((n+w_{\mathrm{max}})^2)$. This was open both for 0-1-Knapsack (where each item can be picked at most once) and Bounded Knapsack (where each item comes with a multiplicity). The quest to resolve this question led to algorithms that solve Bounded Knapsack in time $\tilde O(n^3 w_{\mathrm{max}}^2)$ [Tamir'09], $\tilde O(n^2 w_{\mathrm{max}}^2)$ and $\tilde O(n w_{\mathrm{max}}^3)$ [Bateni, Hajiaghayi, Seddighin, Stein'18], $O(n^2 w_{\mathrm{max}}^2)$ and $\tilde O(n w_{\mathrm{max}}^2)$ [Eisenbrand and Weismantel'18], $O(n + w_{\mathrm{max}}^3)$ [Polak, Rohwedder, Wegrzycki'21], and very recently $\tilde O(n + w_{\mathrm{max}}^{12/5})$ [Chen, Lian, Mao, Zhang'23]. In this paper, we resolve this question by designing an algorithm for Bounded Knapsack with running time $\tilde O(n + w_{\mathrm{max}}^2)$, which is conditionally near-optimal. This resolves the question both for the classic 0-1-Knapsack problem and for the Bounded Knapsack problem.
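For contrast with the near-optimal $\tilde O(n + w_{\mathrm{max}}^2)$ result, the sketch below is only the textbook pseudopolynomial dynamic program for 0-1 Knapsack, which runs in O(n * capacity) time; it is not the algorithm developed in this paper.

```python
def knapsack_01(weights, profits, capacity):
    """Textbook O(n * capacity) DP for 0-1 Knapsack (not the paper's algorithm)."""
    best = [0] * (capacity + 1)                 # best[c] = max profit within weight budget c
    for w, p in zip(weights, profits):
        for c in range(capacity, w - 1, -1):    # descending so each item is used at most once
            best[c] = max(best[c], best[c - w] + p)
    return best[capacity]

# Example: items (weight, profit) = (3, 4), (2, 3), (4, 5) with capacity 5.
print(knapsack_01([3, 2, 4], [4, 3, 5], 5))     # -> 7
```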

High-dimensional problems have long been considered the Achilles' heel of Bayesian optimization algorithms. Spurred by the curse of dimensionality, a large collection of algorithms aim to make Bayesian optimization more performant in this setting, commonly by imposing various simplifying assumptions on the objective. In this paper, we identify the degeneracies that make vanilla Bayesian optimization poorly suited to high-dimensional tasks, and further show how existing algorithms address these degeneracies through the lens of lowering the model complexity. Moreover, we propose an enhancement to the prior assumptions that are typical of vanilla Bayesian optimization algorithms, which reduces the complexity to manageable levels without imposing structural restrictions on the objective. Our modification, a simple scaling of the Gaussian process lengthscale prior with the dimensionality, reveals that standard Bayesian optimization works drastically better than previously thought in high dimensions, clearly outperforming existing state-of-the-art algorithms on multiple commonly considered real-world high-dimensional tasks.
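One plausible reading of such a dimensionality-scaled lengthscale prior is sketched below: a log-normal prior whose median grows like the square root of the dimension d. The exact functional form and hyperparameters used in the paper are not reproduced here and should be treated as assumptions.

```python
import numpy as np
from scipy import stats

def lengthscale_prior(d, mu0=0.0, sigma0=1.0):
    """Log-normal lengthscale prior whose median scales with sqrt(d) (assumed form)."""
    # log(ell) ~ Normal(mu0 + 0.5 * log(d), sigma0)  =>  median(ell) grows like sqrt(d).
    return stats.lognorm(s=sigma0, scale=np.exp(mu0 + 0.5 * np.log(d)))

for d in (2, 20, 200):
    prior = lengthscale_prior(d)
    print(f"d={d:4d}  median lengthscale = {prior.median():.2f}")
```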

Data augmentation, the artificial creation of training data for machine learning by transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing the generalization capabilities of a model, it can also address many other challenges and problems, from overcoming a limited amount of training data, over regularizing the objective, to limiting the amount of data used in order to protect privacy. Based on a precise description of the goals and applications of data augmentation (C1) and a taxonomy of existing works (C2), this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners (C3). Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and provide state-of-the-art references expounding which methods are highly promising (C4). Finally, research perspectives that may constitute a building block for future work are given (C5).
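As a concrete instance of the simplest word-level family that such surveys cover (random swap and random deletion in the style of "easy data augmentation"), the sketch below shows two transformations; the survey's groupings and recommendations are not reflected in this snippet.

```python
import random

def random_swap(tokens, n=1):
    """Swap n random token pairs (a simple word-level augmentation)."""
    tokens = tokens[:]
    for _ in range(n):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token with probability p, keeping at least one token."""
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]

random.seed(0)
text = "data augmentation creates additional training examples by transformation".split()
print(" ".join(random_swap(text, n=2)))
print(" ".join(random_deletion(text, p=0.2)))
```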

We address the task of automatically scoring candidate competency from textual features, namely automatic speech recognition (ASR) transcriptions, in the asynchronous video job interview (AVI) setting. The key challenge is how to construct the dependency relation between questions and answers and to conduct semantic-level interaction for each question-answer (QA) pair. Most recent studies on AVI, however, focus on how to better represent questions and answers, but ignore the dependency information and the interaction between them, which is critical for QA evaluation. In this work, we propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs. Specifically, we construct a sentence-level relational graph neural network to capture the dependency information of sentences within and between the question and the answer. Based on these graphs, we employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session. Finally, we propose a gated recurrent unit encoder to represent the temporal sequence of question-answer pairs for the final prediction. Empirical results on CHNAT (a real-world dataset) validate that our proposed model significantly outperforms text-matching-based benchmark models. Ablation studies and experiments with 10 random seeds further show the effectiveness and stability of our models.
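The overall shape of this pipeline (sentence-level message passing, attention over the QA session, and a GRU over successive QA pairs) can be sketched roughly as below in PyTorch. Layer sizes, graph construction, and aggregation choices are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class QAPairScorer(nn.Module):
    """Rough schematic: graph-style message passing over sentences,
    attention over the QA session, then a GRU over successive QA pairs."""
    def __init__(self, dim=128):
        super().__init__()
        self.msg = nn.Linear(dim, dim)                     # sentence-level message passing
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gru = nn.GRU(dim, dim, batch_first=True)      # temporal encoder over QA pairs
        self.head = nn.Linear(dim, 1)                      # competency score

    def forward(self, sent_emb, adj):
        # sent_emb: (num_pairs, num_sents, dim); adj: (num_pairs, num_sents, num_sents)
        h = torch.relu(adj @ self.msg(sent_emb))           # one round of neighbor aggregation
        h, _ = self.attn(h, h, h)                          # semantic-level interaction
        pair_repr = h.mean(dim=1).unsqueeze(0)             # (1, num_pairs, dim)
        out, _ = self.gru(pair_repr)
        return self.head(out[:, -1])                       # score from the last QA pair

# Toy usage with random sentence embeddings for 5 QA pairs of 6 sentences each.
scorer = QAPairScorer()
print(scorer(torch.randn(5, 6, 128), torch.softmax(torch.randn(5, 6, 6), dim=-1)))
```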

We propose a novel method for automatic reasoning on knowledge graphs based on debate dynamics. The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents which extract arguments -- paths in the knowledge graph -- with the goal to promote the fact being true (thesis) or the fact being false (antithesis), respectively. Based on these arguments, a binary classifier, called the judge, decides whether the fact is true or false. The two agents can be considered as sparse, adversarial feature generators that present interpretable evidence for either the thesis or the antithesis. In contrast to other black-box methods, the arguments allow users to get an understanding of the decision of the judge. Since the focus of this work is to create an explainable method that maintains a competitive predictive accuracy, we benchmark our method on the triple classification and link prediction task. Thereby, we find that our method outperforms several baselines on the benchmark datasets FB15k-237, WN18RR, and Hetionet. We also conduct a survey and find that the extracted arguments are informative for users.
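The division of labour between the two path-extracting agents and the judge can be illustrated with a toy setup: here the "agents" simply sample random relation paths from a tiny hand-made graph, and the "judge" is a logistic-regression classifier over bag-of-relation path features. This illustrates the framing only; the reinforcement-learning agents and judge architecture of the paper are not reproduced.

```python
import random
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy knowledge graph: entity -> list of (relation, neighbor).  Illustrative only.
kg = {
    "aspirin": [("treats", "headache"), ("interacts_with", "ibuprofen")],
    "headache": [("symptom_of", "flu")],
    "ibuprofen": [("treats", "headache")],
    "flu": [],
}
relations = ["treats", "interacts_with", "symptom_of"]

def sample_argument(start, steps=2):
    """An 'agent' extracts an argument: a random relation path starting at `start`."""
    path, node = [], start
    for _ in range(steps):
        if not kg.get(node):
            break
        rel, node = random.choice(kg[node])
        path.append(rel)
    return path

def featurize(paths):
    """Bag-of-relations features over the arguments presented to the judge."""
    x = np.zeros(len(relations))
    for p in paths:
        for rel in p:
            x[relations.index(rel)] += 1
    return x

random.seed(0)
# Two agents argue about the triple (aspirin, treats, headache).
pro = [sample_argument("aspirin") for _ in range(3)]      # thesis: triple is true
con = [sample_argument("ibuprofen") for _ in range(3)]    # antithesis: triple is false
X = np.stack([featurize(pro), featurize(con)])
judge = LogisticRegression().fit(X, [1, 0])               # toy 'judge' on two examples
print(judge.predict(X))
```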
