
Time-Sensitive Question Answering (TSQA) demands the effective utilization of specific temporal contexts, encompassing multiple time-evolving facts, to address time-sensitive questions. This necessitates not only the parsing of temporal information within questions but also the identification and understanding of time-evolving facts to generate accurate answers. However, current large language models remain limited in their sensitivity to temporal information and in their temporal reasoning capabilities. In this paper, we propose a novel framework that enhances temporal awareness and reasoning through Temporal Information-Aware Embedding and Granular Contrastive Reinforcement Learning. Experimental results on four TSQA datasets demonstrate that our framework significantly outperforms existing LLMs on TSQA tasks, marking a step forward in bridging the performance gap between machine and human temporal understanding and reasoning.
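The abstract does not spell out the training objective, but as a rough illustration of the contrastive idea behind temporal awareness, the following is a minimal numpy sketch of an InfoNCE-style loss in which the negatives are the same fact anchored at the wrong timestamps. The function name, temperature, and setup are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def temporal_contrastive_loss(q_emb, pos_emb, neg_embs, tau=0.1):
    """InfoNCE-style loss: pull the question embedding toward the fact that is
    valid at the asked time (pos_emb) and push it away from the same fact
    anchored at other timestamps (neg_embs). Hypothetical illustration only."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    logits = np.array([cos(q_emb, pos_emb)] + [cos(q_emb, n) for n in neg_embs]) / tau
    logits -= logits.max()                       # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# toy usage with random embeddings
rng = np.random.default_rng(0)
q, pos = rng.normal(size=64), rng.normal(size=64)
negs = rng.normal(size=(4, 64))
print(temporal_contrastive_loss(q, pos, negs))
```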

Related Content

Question Answering (QA) is the task of using computers to automatically answer questions posed by users in order to satisfy their information needs. Unlike existing search engines, a QA system is an advanced form of information service: instead of returning a list of documents ranked by keyword matching, it returns precise natural-language answers. In recent years, with the rapid development of artificial intelligence, automatic question answering has become a widely followed research direction with broad prospects.

In Industry 4.0 systems, a considerable number of resource-constrained Industrial Internet of Things (IIoT) devices engage in frequent data interactions due to the need for model training, which gives rise to security and privacy concerns. To address these challenges, this paper considers a digital twin (DT) and blockchain-assisted federated learning (FL) scheme. To facilitate the FL process, we first employ fog devices with abundant computational capabilities to generate DTs for resource-constrained edge devices, thereby aiding them in local training. Subsequently, we formulate an FL delay minimization problem that considers both model transmission time and synchronization time and incorporates cooperative jamming to ensure secure synchronization of the DT. To address this non-convex optimization problem, we propose a decomposition algorithm. In particular, we introduce upper limits on the local device training delay and the effects of aggregation jamming as auxiliary variables, thereby transforming the problem into a convex optimization problem that can be decomposed and solved independently. Finally, a blockchain verification mechanism is employed to guarantee the integrity of model uploading throughout the FL process and the identities of the participants. The final global model is obtained from the verified local and global models within the blockchain through the application of deep learning techniques. The efficacy of our proposed cooperative jamming-based FL process is verified through numerical analysis, which demonstrates that the integrated DT and blockchain-assisted FL scheme significantly outperforms the benchmark schemes in terms of execution time, block optimization, and accuracy.
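As a back-of-the-envelope illustration of the kind of round-delay quantity such a formulation minimizes, the numpy sketch below evaluates one plausible synchronous-round delay model (the slowest device's training-plus-upload time, gated by digital-twin synchronization). The actual delay model, jamming terms, and decomposition algorithm in the paper are not reproduced here.

```python
import numpy as np

def fl_round_delay(train_delay, upload_delay, dt_sync_delay):
    """One plausible synchronous-round delay model: the aggregator waits for the
    slowest device to finish local training plus model upload, and the round also
    cannot complete before digital-twin synchronization finishes. Inputs are
    per-device delays in seconds; the paper's actual model may differ."""
    per_device = np.asarray(train_delay) + np.asarray(upload_delay)
    return float(max(per_device.max(), np.max(dt_sync_delay)))

# toy example with four IIoT devices
print(fl_round_delay(train_delay=[1.2, 0.8, 2.0, 1.5],
                     upload_delay=[0.3, 0.5, 0.2, 0.4],
                     dt_sync_delay=[1.0, 1.1, 0.9, 1.3]))
```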

The passive and frequency-flat reflection of the intelligent reflecting surface (IRS), as well as the high-dimensional IRS-reflected channels, have posed significant challenges for efficient IRS channel estimation, especially in wideband communication systems with significant multi-path channel delay spread. To address these challenges, we propose a novel neural network (NN)-empowered framework for IRS channel autocorrelation matrix estimation in wideband orthogonal frequency division multiplexing (OFDM) systems. This framework relies only on the easily accessible reference signal received power (RSRP) measurements at users in existing wideband communication systems, without requiring additional pilot transmission. Based on the estimates of the channel autocorrelation matrix, the passive reflection of the IRS is optimized to maximize the average user received signal-to-noise ratio (SNR) over all subcarriers in the OFDM system. Numerical results verify that the proposed algorithm significantly outperforms existing power-measurement-based IRS reflection designs in wideband channels.
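Under the common assumption that the average received SNR is proportional to v^H R v, where R is the estimated channel autocorrelation matrix and v the unit-modulus reflection vector, a minimal numpy sketch of the reflection-optimization step might look as follows. This is a generic phase-domain gradient-ascent heuristic, not the paper's algorithm.

```python
import numpy as np

def optimize_irs_phases(R, iters=500, step=0.05, seed=0):
    """Maximize v^H R v over unit-modulus reflection coefficients v = exp(j*theta)
    by gradient ascent on the phases; R is the (estimated) Hermitian channel
    autocorrelation matrix."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, R.shape[0])
    for _ in range(iters):
        v = np.exp(1j * theta)
        grad = 2 * np.imag(np.conj(v) * (R @ v))           # d(v^H R v)/d theta
        theta += step * grad / (np.abs(grad).max() + 1e-12) # bounded-step ascent
    v = np.exp(1j * theta)
    return v, float(np.real(np.vdot(v, R @ v)))

# toy Hermitian PSD autocorrelation matrix for an 8-element IRS
rng = np.random.default_rng(1)
H = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
R = H @ H.conj().T / 8
v_rand = np.exp(1j * rng.uniform(0, 2 * np.pi, 8))
_, obj = optimize_irs_phases(R)
print("optimized:", round(obj, 2),
      " random phases:", round(float(np.real(np.vdot(v_rand, R @ v_rand))), 2))
```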

We show through numerical simulation that the Quantum Approximate Optimization Algorithm (QAOA) for higher-order, random-coefficient, heavy-hex compatible spin glass Ising models has strong parameter concentration across problem sizes from $16$ up to $127$ qubits for $p=1$ up to $p=5$, which allows for straightforward transfer learning of QAOA angles on instance sizes where exhaustive grid-search is prohibitive even for $p>1$. We use Matrix Product State (MPS) simulation at different bond dimensions to obtain confidence in these results, and we obtain the optimal solutions to these combinatorial optimization problems using CPLEX. In order to assess the ability of current noisy quantum hardware to exploit such parameter concentration, we execute short-depth QAOA circuits (with a CNOT depth of 6 per $p$, resulting in circuits which contain $1420$ two-qubit gates for $127$-qubit $p=5$ QAOA) on $100$ higher-order (cubic term) Ising models on IBM quantum superconducting processors with $16, 27, 127$ qubits using QAOA angles learned from a single $16$-qubit instance. We show that (i) the best quantum processors generally find lower-energy solutions up to $p=3$ for 27-qubit systems and up to $p=2$ for 127-qubit systems and are overcome by noise at higher values of $p$, and (ii) the best quantum processors find mean energies that are about a factor of two off from the noise-free numerical simulation results. Additional insights from our experiments are that large performance differences exist among different quantum processors, even of the same generation, and that dynamical decoupling significantly improves performance for some quantum processors but decreases performance for others. Lastly, we show $p=1$ QAOA angle mean energy landscapes computed using up to a $414$-qubit quantum computer, showing that the mean QAOA energy landscapes remain very similar as the problem size changes.
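As a self-contained illustration of the p=1 QAOA ansatz and of the grid search that becomes prohibitive at larger sizes, here is a dense-statevector numpy sketch on a toy 4-qubit higher-order Ising instance. It stands in for neither the MPS simulations nor the hardware experiments described above, and the instance is made up.

```python
import numpy as np

def ising_cost(n, terms):
    """Diagonal of the cost Hamiltonian for an Ising model with arbitrary-order
    terms; `terms` maps a tuple of qubit indices to its coefficient."""
    bits = (np.arange(2**n)[:, None] >> np.arange(n)) & 1
    z = 1 - 2 * bits                                     # spin values +/-1 per qubit
    cost = np.zeros(2**n)
    for idx, coeff in terms.items():
        cost += coeff * np.prod(z[:, list(idx)], axis=1)
    return cost

def qaoa_p1_energy(n, cost, gamma, beta):
    """Exact statevector expectation <psi(gamma,beta)| H_C |psi(gamma,beta)> for p=1."""
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)      # |+>^n initial state
    psi *= np.exp(-1j * gamma * cost)                    # phase-separation layer
    c, s = np.cos(beta), -1j * np.sin(beta)              # entries of e^{-i beta X}
    for q in range(n):                                   # mixer on every qubit
        psi = psi.reshape(2**(n - q - 1), 2, 2**q)
        psi = np.stack([c * psi[:, 0] + s * psi[:, 1],
                        s * psi[:, 0] + c * psi[:, 1]], axis=1).reshape(-1)
    return float(np.real(np.sum(np.abs(psi)**2 * cost)))

# toy 4-qubit instance with quadratic and cubic (higher-order) terms
terms = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 0.5, (0, 1, 2): 1.0}
cost = ising_cost(4, terms)
grid = np.linspace(0, np.pi, 31)
best = min(((qaoa_p1_energy(4, cost, g, b), g, b) for g in grid for b in grid),
           key=lambda t: t[0])
print("best p=1 energy %.3f at gamma=%.2f, beta=%.2f (ground state %.3f)"
      % (best[0], best[1], best[2], cost.min()))
```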

Deploying Convolutional Neural Networks (CNNs) on resource-constrained devices necessitates efficient management of computational resources, often via distributed systems susceptible to latency from straggler nodes. This paper introduces the Flexible Coded Distributed Convolution Computing (FCDCC) framework to enhance fault tolerance and numerical stability in distributed CNNs. We extend Coded Distributed Computing (CDC) with Circulant and Rotation Matrix Embedding (CRME), originally proposed for matrix multiplication, to high-dimensional tensor convolution. For the proposed scheme, referred to as Numerically Stable Coded Tensor Convolution (NSCTC), we further propose two new coded partitioning schemes: Adaptive-Padding Coded Partitioning (APCP) for the input tensor and Kernel-Channel Coded Partitioning (KCCP) for the filter tensor. These strategies enable the linear decomposition of tensor convolutions and their encoding into CDC subtasks, combining model parallelism with coded redundancy for robust and efficient execution. Theoretical analysis identifies an optimal trade-off between communication and storage costs. Empirical results validate the framework's effectiveness in computational efficiency, fault tolerance, and scalability across various CNN architectures.
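For background on the coded-computing idea being extended, here is a minimal numpy sketch of straggler-tolerant coded matrix-vector multiplication with a real Vandermonde (MDS) code, i.e. the original matrix-multiplication setting rather than the tensor-convolution extension. The ill-conditioning of real Vandermonde matrices at larger scales is precisely the numerical-stability issue that motivates embeddings such as CRME.

```python
import numpy as np

def encode_blocks(blocks, n_workers):
    """Encode k row-blocks of A into n coded blocks with a real Vandermonde code,
    so any k of the n worker results suffice to recover A @ x (MDS property).
    Real Vandermonde codes become ill-conditioned as n grows, which is the
    numerical-stability problem CRME-style embeddings address."""
    k = len(blocks)
    evals = np.arange(1, n_workers + 1, dtype=float)
    G = np.vander(evals, k, increasing=True)             # n x k generator matrix
    return [sum(G[w, j] * blocks[j] for j in range(k)) for w in range(n_workers)], G

def decode(results, worker_ids, G):
    """Recover the k uncoded partial products from any k surviving worker results."""
    coeffs = np.linalg.inv(G[worker_ids, :])              # k x k, invertible for distinct nodes
    stacked = np.stack(results)
    return [coeffs[j] @ stacked for j in range(coeffs.shape[0])]

# toy run: k=3 blocks, n=5 workers, workers 1 and 3 straggle
rng = np.random.default_rng(0)
A, x = rng.normal(size=(6, 4)), rng.normal(size=4)
blocks = np.split(A, 3)                                   # three 2x4 row-blocks
coded, G = encode_blocks(blocks, n_workers=5)
partials = [c @ x for c in coded]                         # each worker's product
alive = [0, 2, 4]                                         # surviving workers
decoded = decode([partials[i] for i in alive], alive, G)
assert np.allclose(np.concatenate(decoded), A @ x)
```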

The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses are in accordance with human preferences. Prevalent alignment techniques, such as DPO, PPO, and their variants, align LLMs by changing the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias the generation towards responses that are aligned with human preferences. The best-known inference-time alignment method, called Best-of-N, is as effective as the state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally infeasible. In this work, we introduce Speculative Rejection, a computationally viable inference-time alignment algorithm. It generates high-scoring responses according to a given reward model, as Best-of-N does, while being 16 to 32 times more computationally efficient.
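To make the baseline concrete, here is a minimal sketch of Best-of-N with placeholder generate and reward functions (both are toy stand-ins, not a specific library API). Speculative Rejection itself, which prunes unpromising generations early, is not reproduced here.

```python
import random

def best_of_n(prompt, n, generate, reward):
    """Best-of-N baseline: sample n full responses, score each with the reward
    model, and return the highest-scoring one. `generate` and `reward` are
    placeholders for a sampler and a reward model."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda resp: reward(prompt, resp))

# toy stand-ins so the sketch runs end to end
random.seed(0)
vocab = ["helpful answer", "rude answer", "verbose answer", "concise answer"]
generate = lambda p: random.choice(vocab)
reward = lambda p, r: {"helpful answer": 2.0, "concise answer": 1.5,
                       "verbose answer": 0.5, "rude answer": -1.0}[r]
print(best_of_n("How do I reset my password?", n=8,
                generate=generate, reward=reward))
```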

The military is investigating methods to improve communication and agility in its multi-domain operations (MDO). The Internet of Things (IoT) has gained traction in both public and government domains, and its usage in MDO may revolutionize future battlefields and enable strategic advantage. While this technology can strengthen military capabilities, it also brings challenges, one of which is uncertainty and its associated risk. A key question is how these uncertainties can be addressed. Recently published studies have proposed information camouflage to transform information from one data domain to another. As this is a comparatively new approach, we investigate the challenges of such transformations and how the associated uncertainties, specifically unknown-unknowns, can be detected and addressed to improve decision-making.

Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs. Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., entities may exhibit diverse roles within task relations, and references may make different contributions to queries. This work proposes an adaptive attentional network for few-shot KG completion by learning adaptive entity and reference representations. Specifically, entities are modeled by an adaptive neighbor encoder to discern their task-oriented roles, while references are modeled by an adaptive query-aware aggregator to differentiate their contributions. Through the attention mechanism, both entities and references capture their fine-grained semantic meanings and thus render more expressive representations, which makes them more predictive for knowledge acquisition in the few-shot scenario. Evaluation in link prediction on two public datasets shows that our approach achieves new state-of-the-art results with different few-shot sizes.
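One plausible minimal form of a query-aware aggregator is scaled dot-product attention from the query over the reference embeddings, sketched below in numpy. The paper's actual encoder and aggregator are more elaborate, and all names here are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def query_aware_aggregate(refs, query):
    """Aggregate few-shot reference-pair embeddings with attention weights that
    depend on the query, so references that better match the query contribute
    more. refs: (m, d) reference embeddings; query: (d,) query embedding."""
    weights = softmax(refs @ query / np.sqrt(refs.shape[1]))  # query-conditioned attention
    return weights @ refs, weights

# toy usage: 3 reference pairs, one closely aligned with the query
rng = np.random.default_rng(0)
refs, query = rng.normal(size=(3, 16)), rng.normal(size=16)
refs[1] = query + 0.1 * rng.normal(size=16)                   # make reference 1 query-like
agg, w = query_aware_aggregate(refs, query)
print(np.round(w, 3))                                         # reference 1 dominates
```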

Domain shift is a fundamental problem in visual recognition which typically arises when the source and target data follow different distributions. The existing domain adaptation approaches which tackle this problem work in the closed-set setting, with the assumption that the source and the target data share exactly the same classes of objects. In this paper, we tackle a more realistic problem of open-set domain shift where the target data contains additional classes that are not present in the source data. More specifically, we introduce an end-to-end Progressive Graph Learning (PGL) framework in which a graph neural network with episodic training is integrated to suppress the underlying conditional shift and adversarial learning is adopted to close the gap between the source and target distributions. Compared to the existing open-set adaptation approaches, our approach is guaranteed to achieve a tighter upper bound on the target error. Extensive experiments on three standard open-set benchmarks show that our approach significantly outperforms the state of the art in open-set domain adaptation.
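One standard realization of the adversarial component is a DANN-style domain discriminator trained through a gradient-reversal layer, sketched below in PyTorch. This is a generic illustration and may differ from the exact adversarial formulation used in PGL.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass,
    so the shared feature extractor learns to fool the domain discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

feat = nn.Sequential(nn.Linear(32, 64), nn.ReLU())          # shared feature extractor
disc = nn.Linear(64, 1)                                      # domain discriminator
opt = torch.optim.Adam(list(feat.parameters()) + list(disc.parameters()), lr=1e-3)

src, tgt = torch.randn(16, 32), torch.randn(16, 32)          # toy source/target batches
x = torch.cat([src, tgt])
dom = torch.cat([torch.zeros(16, 1), torch.ones(16, 1)])     # 0 = source, 1 = target
logits = disc(GradReverse.apply(feat(x), 1.0))
loss = nn.functional.binary_cross_entropy_with_logits(logits, dom)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```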

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
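With the negative-entropy regularizer, the smoothed max becomes a scaled log-sum-exp whose gradient is the softmax. The numpy sketch below shows the operator and a smoothed Viterbi value recursion built on it; it is a toy illustration with made-up scores, not the paper's code.

```python
import numpy as np

def smoothed_max(x, gamma=1.0):
    """Entropy-regularized max: gamma * logsumexp(x / gamma). As gamma -> 0 it
    recovers the hard max; its gradient is softmax(x / gamma), so it is smooth."""
    z = x / gamma
    m = z.max()
    w = np.exp(z - m)
    return gamma * (m + np.log(w.sum())), w / w.sum()

def smoothed_viterbi_value(theta, gamma=1.0):
    """Smoothed Viterbi value for a chain: theta[t][s_prev, s] is the score of
    moving from state s_prev to state s at step t; the hard max over previous
    states is replaced by smoothed_max, making the value differentiable."""
    v = np.zeros(theta.shape[1])
    for t in range(theta.shape[0]):
        v = np.array([smoothed_max(v + theta[t][:, s], gamma)[0]
                      for s in range(theta.shape[2])])
    return smoothed_max(v, gamma)[0]

theta = np.random.default_rng(0).normal(size=(4, 3, 3))   # 4 steps, 3 states
print(smoothed_viterbi_value(theta, gamma=0.1))           # close to the hard Viterbi score
```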

Multi-relation Question Answering is a challenging task, due to the need for elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
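A toy sketch of such a hop loop is given below: at each hop the model attends over question tokens with its current state, predicts a relation, and folds it back into the state. The update rule, dimensions, and names are assumptions for illustration, not the published architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hop_by_hop(question_tok_embs, relation_embs, n_hops=2):
    """Illustrative hop loop over a multi-relation question: attend over question
    tokens with the current reasoning state, predict a relation from the attended
    summary, then update the state with the prediction."""
    state = np.zeros(question_tok_embs.shape[1])
    hops = []
    for _ in range(n_hops):
        att = softmax(question_tok_embs @ state)        # which part of the question to parse now
        q_part = att @ question_tok_embs
        rel_scores = softmax(relation_embs @ q_part)    # predicted relation distribution
        rel_id = int(rel_scores.argmax())
        state = state + q_part + relation_embs[rel_id]  # update the reasoning state
        hops.append(rel_id)                             # traceable intermediate predictions
    return hops

rng = np.random.default_rng(0)
print(hop_by_hop(rng.normal(size=(6, 32)), rng.normal(size=(5, 32))))
```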
