
Over the last 30 years, the World Wide Web has changed significantly. In this paper, we argue that common practices for preparing web pages for delivery conflict with many efforts to present content with minimal latency, a fundamental goal that has driven changes to the WWW. To bolster our arguments, we revisit the reasons that led to changes in HTTP and compare them systematically with techniques for preparing web pages. We found that the structure of many web pages leverages features of HTTP/1.1 but hinders the use of recent HTTP features for presenting content quickly. To improve the situation, we propose fine-grained content segmentation, which would make it possible to exploit the streaming capabilities of recent HTTP versions and to render content as quickly as possible without changing the underlying protocols or web browsers.
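The segmentation idea can be illustrated with a small sketch. The segment names, priorities, and markup below are our own assumptions, not the paper's scheme: the point is only that a page split into independently deliverable fragments can be flushed render-critical-first over a streaming-capable HTTP version.

```python
# Hypothetical page segments, ordered arbitrarily at authoring time.
SEGMENTS = [
    ("deferred", "<footer>Imprint</footer>"),
    ("critical", "<style>body{font:16px sans-serif}</style>"),
    ("above-the-fold", "<header>News</header><article>Lead story</article>"),
]

def stream_page(segments):
    """Yield segments in priority order so the client can render early.

    Each yielded chunk could map to one HTTP/2 DATA frame or one
    chunked-transfer chunk; the browser renders as chunks arrive.
    """
    priority = {"critical": 0, "above-the-fold": 1, "deferred": 2}
    for _, chunk in sorted(segments, key=lambda s: priority[s[0]]):
        yield chunk

page = "".join(stream_page(SEGMENTS))
```

With coarse-grained pages the whole document must be assembled before useful content arrives; here the critical CSS is emitted first regardless of authoring order.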

Related content

In this paper, we demonstrate how Large Language Models (LLMs) can effectively learn to use an off-the-shelf information retrieval (IR) system specifically when additional context is required to answer a given question. Given the performance of IR systems, the optimal strategy for question answering does not always entail external information retrieval; rather, it often involves leveraging the parametric memory of the LLM itself. Prior research has identified this phenomenon in the PopQA dataset, wherein the most popular questions are effectively addressed using the LLM's parametric memory, while less popular ones require IR system usage. Following this, we propose a tailored training approach for LLMs, leveraging existing open-domain question answering datasets. Here, LLMs are trained to generate a special token, <RET>, when they do not know the answer to a question. Our evaluation of the Adaptive Retrieval LLM (Adapt-LLM) on the PopQA dataset showcases improvements over the same LLM under three configurations: (i) retrieving information for all the questions, (ii) always using the parametric memory of the LLM, and (iii) using a popularity threshold to decide when to use a retriever. Through our analysis, we demonstrate that Adapt-LLM is able to generate the <RET> token when it determines that it does not know how to answer a question, indicating the need for IR, while it achieves notably high accuracy levels when it chooses to rely only on its parametric memory.
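The inference-time behaviour described above can be sketched as a small decision loop. The prompt templates and the toy model/retriever below are stand-ins of ours, not the paper's code; they only illustrate the <RET>-triggered fallback.

```python
RET = "<RET>"

def answer(question, model, retriever):
    """Try parametric memory first; retrieve and re-prompt on <RET>."""
    out = model(f"Q: {question}\nA:")
    if out.strip() == RET:          # model signals it lacks the answer
        context = retriever(question)
        out = model(f"Context: {context}\nQ: {question}\nA:")
    return out

# Deterministic toy stand-ins for demonstration only:
def toy_model(prompt):
    if "Context:" in prompt:
        return "Oslo"               # answers from the provided context
    return "Paris" if "capital of France" in prompt else RET

toy_retriever = lambda q: "Oslo is the capital of Norway."
```

A popular question is answered directly, while an unpopular one emits <RET> and is answered only after retrieval, mirroring the three baselines the paper compares against.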

Inspired by the great potential of Large Language Models (LLMs) for solving complex coding tasks, in this paper, we propose a novel approach, named Code2API, to automatically perform APIzation for Stack Overflow code snippets. Code2API does not require additional model training or any manually crafted rules and can be easily deployed on personal computers without relying on other external tools. Specifically, Code2API guides the LLMs through well-designed prompts to generate well-formed APIs for given code snippets. To elicit knowledge and logical reasoning from LLMs, we use chain-of-thought (CoT) reasoning and few-shot in-context learning, which help the LLMs fully understand the APIzation task and solve it step by step in a manner similar to a developer. Our evaluations show that Code2API achieves remarkable accuracy in identifying method parameters (65%) and return statements (66%) equivalent to human-generated ones, surpassing the current state-of-the-art approach, APIzator, by 15.0% and 16.5%, respectively. Moreover, compared with APIzator, our user study demonstrates that Code2API exhibits superior performance in generating meaningful method names, even surpassing human-level performance, and developers are more willing to use APIs generated by our approach, highlighting the applicability of our tool in practice. Finally, we successfully extend our framework to a Python dataset, achieving performance comparable to Java, which verifies the generalizability of our tool.
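How such a prompt might be assembled can be sketched as below. The CoT instruction, the few-shot pair, and the template are our illustrative assumptions, not Code2API's actual prompts; the structure (instruction, worked examples, target snippet) is what matters.

```python
# One hypothetical few-shot pair: (raw snippet, its APIzed form).
FEW_SHOT = [(
    "int n = list.size();\nSystem.out.println(n);",
    "public static int getListSize(List<?> list) {\n    return list.size();\n}",
)]

COT = ("Think step by step: identify the inputs the snippet depends on, "
       "the value it produces, then emit a well-formed method.")

def build_prompt(snippet, few_shot=FEW_SHOT, cot=COT):
    """Compose a CoT instruction, few-shot examples, and the target snippet."""
    parts = [cot]
    for src, api in few_shot:
        parts.append(f"Snippet:\n{src}\nAPI:\n{api}")
    parts.append(f"Snippet:\n{snippet}\nAPI:")
    return "\n\n".join(parts)

prompt = build_prompt("String s = sb.toString();")
```

The LLM's completion after the final "API:" would then be parsed as the generated method.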

In this paper, we introduce LGTM, a novel Local-to-Global pipeline for Text-to-Motion generation. LGTM utilizes a diffusion-based architecture and aims to address the challenge of accurately translating textual descriptions into semantically coherent human motion in computer animation. Traditional methods often struggle with semantic discrepancies, particularly in aligning specific motions to the correct body parts. To address this issue, we propose a two-stage pipeline: it first employs large language models (LLMs) to decompose global motion descriptions into part-specific narratives, which are then processed by independent body-part motion encoders to ensure precise local semantic alignment. Finally, an attention-based full-body optimizer refines the motion generation results and guarantees overall coherence. Our experiments demonstrate that LGTM achieves significant improvements in generating locally accurate, semantically aligned human motion, marking a notable advancement in text-to-motion applications. Code and data for this paper are available at //github.com/L-Sun/LGTM

In this paper, we explore an efficient uncoupled unsourced random access (UURA) scheme for 6G massive communication. UURA is a typical framework of unsourced random access that addresses the problems of codeword detection and message stitching without the use of check bits. Firstly, we establish a framework for UURA that allows immediate decoding of sub-messages upon arrival; the processing delay is thus effectively reduced owing to the shorter waiting time. Next, we propose an integrated decoding algorithm for sub-messages by leveraging matrix information geometry (MIG) theory. Specifically, MIG is applied to measure the feature similarities of codewords belonging to the same user equipment, so a sub-message can be stitched as soon as it is received. This enables the timely recovery of a portion of the original message by simultaneously detecting and stitching codewords within the current sub-slot. Furthermore, we analyze the performance of the proposed integrated decoding-based UURA scheme in terms of computational complexity and convergence rate. Finally, we present extensive simulation results to validate the effectiveness of the proposed scheme in 6G wireless networks.
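The abstract does not specify which MIG measure is used, so as an illustration the sketch below uses the standard affine-invariant Riemannian distance between symmetric positive-definite feature matrices and stitches a sub-message to the user whose codeword statistics are nearest; the function names and the SPD choice are our assumptions.

```python
import numpy as np

def air_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    sqrt(sum_i log^2 lambda_i), with lambda_i the eigenvalues of A^{-1}B."""
    w = np.linalg.eigvals(np.linalg.solve(A, B))
    return float(np.sqrt(np.sum(np.log(w.real) ** 2)))

def stitch(sub_cov, user_covs):
    """Attach an incoming sub-message to the user whose accumulated
    codeword feature matrix is geometrically closest."""
    dists = [air_distance(sub_cov, c) for c in user_covs]
    return int(np.argmin(dists))
```

Because the distance is computed per arriving sub-slot, detection and stitching can proceed together rather than waiting for the full message, which is the source of the delay reduction described above.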

In this paper, we propose modelling human translation production as a hierarchy of three embedded translation processes. The proposed architecture replicates the temporal dynamics of keystroke production across sensorimotor, cognitive, and phenomenal layers. Utilizing data from the CRITT TPR-DB, the Task Segment Framework, and the HOF taxonomy, we demonstrate the temporal breakdown of the typing flow on distinct timelines within these three layers.

In this paper, we review recent approaches for explaining concepts in neural networks. Concepts can act as a natural link between learning and reasoning: once the concepts are identified that a neural learning system uses, one can integrate those concepts with a reasoning system for inference or use a reasoning system to act upon them to improve or enhance the learning system. On the other hand, knowledge can not only be extracted from neural networks but concept knowledge can also be inserted into neural network architectures. Since integrating learning and reasoning is at the core of neuro-symbolic AI, the insights gained from this survey can serve as an important step towards realizing neuro-symbolic AI based on explainable concepts.

In this paper, we show how Federated Learning (FL) can be applied to vehicular use-cases in which we seek to classify obstacles, irregularities and pavement types on roads. Our proposed framework utilizes FL and TabNet, a state-of-the-art neural network for tabular data. We are the first to demonstrate how TabNet can be integrated with FL. Moreover, we achieve a maximum test accuracy of 93.6%. Finally, we reason why FL is a suitable concept for this data set.
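The aggregation step of such a framework can be sketched with plain FedAvg. The TabNet model is stood in for by a dictionary of parameter arrays, and the weighting-by-client-size rule is the standard FedAvg choice, not necessarily the paper's exact configuration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg: average client parameters weighted by
    each client's local dataset size."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Two hypothetical vehicles report updated parameters after local training:
avg = fed_avg(
    [{"w": np.array([0.0])}, {"w": np.array([4.0])}],
    client_sizes=[1, 3],
)
```

Each vehicle trains on its own road-sensor data locally and only these parameter dictionaries travel to the server, which is what makes FL attractive for the vehicular setting described above.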

In this paper we study $k$-stroll, point-to-point (P2P) orienteering, and the deadline TSP problem on graphs with bounded doubling dimension and bounded treewidth, and present approximation schemes for them. We are given a weighted graph $G=(V,E)$, a start node $s\in V$, distances $d:E\rightarrow \mathbb{Q}^+$, and an integer $k$. In the $k$-stroll problem the goal is to find a path starting at $s$ of minimum length that visits at least $k$ vertices. The dual problem to $k$-stroll is rooted orienteering, in which instead of $k$ we are given a budget $B$ and the goal is to find a walk of length at most $B$ starting at $s$ that visits as many vertices as possible. In P2P orienteering we are given both start and end nodes $s,t$ for the path. In the deadline TSP we are given a deadline $D(v)$ for each $v\in V$ and the goal is to find a walk starting at $s$ that visits as many vertices as possible before their deadlines. The best known approximations are a $(2+\epsilon)$-approximation for rooted or P2P orienteering [12] and an $O(\log n)$-approximation for deadline TSP [3]. No approximation scheme is known for deadline TSP on any metric (not even trees). Our main result is the first approximation scheme for deadline TSP on metrics with bounded doubling dimension. To this end, we first show that if $G$ is a metric with doubling dimension $\kappa$ and aspect ratio $\Delta$, there is a $(1+\epsilon)$-approximation that runs in time $n^{O\left(\left(\log\Delta/\epsilon\right)^{2\kappa+1}\right)}$. We then extend this to obtain an approximation scheme for deadline TSP when the distances and deadlines are integers, which runs in time $n^{O\left(\left(\log \Delta/\epsilon\right)^{2\kappa+2}\right)}$. For graphs with treewidth $\omega$ we show how to solve $k$-stroll and P2P orienteering exactly in polynomial time, and give a $(1+\epsilon)$-approximation for deadline TSP in time $n^{O((\omega\log\Delta/\epsilon)^2)}$.

This article presents the affordances that Generative Artificial Intelligence can offer in the context of disinformation, one of the major threats to our digitalized society. We present a research framework for generating customized agent-based social networks for disinformation simulations that would enable understanding and evaluating the phenomenon, and we discuss open challenges.

Salient object detection is a problem that has been considered in detail, and many solutions have been proposed. In this paper, we argue that work to date has addressed a problem that is relatively ill-posed. Specifically, there is no universal agreement about what constitutes a salient object when multiple observers are queried. This implies that some objects are more likely to be judged salient than others, and that a relative rank exists over salient objects. The solution presented in this paper addresses this more general problem of relative rank, and we propose data and metrics suitable for measuring success in a relative object saliency landscape. A novel deep learning solution is proposed based on a hierarchical representation of relative saliency and stage-wise refinement. We also show that the problem of salient object subitizing can be addressed with the same network, and our approach exceeds the performance of all prior work across every metric considered (both traditional and newly proposed).
