
Our ISCA 2013 paper provides a fundamental empirical understanding of two major factors that make it very difficult to determine the minimum data retention time of a DRAM cell, based on the first comprehensive experimental characterization of the retention time behavior of a large number of modern commodity DRAM chips from 5 major vendors. We study the prevalence, effects, and technology scaling characteristics of two significant phenomena: 1) data pattern dependence (DPD), where the minimum retention time of a DRAM cell is affected by data stored in other DRAM cells, and 2) variable retention time (VRT), where the minimum retention time of a DRAM cell changes unpredictably over time. To this end, we built a flexible FPGA-based testing infrastructure to test DRAM chips, which has enabled a large amount of further experimental research in DRAM. Our ISCA 2013 paper's results using this infrastructure clearly demonstrate that DPD and VRT are significant issues that must be addressed for correct operation in DRAM-based systems, and that their effects are getting worse as DRAM scales to smaller technology node sizes. Our work also provides ideas on how to accurately identify data retention times in the presence of DPD and VRT, e.g., online profiling with error-correcting codes, which later works examined and enabled. Most modern DRAM chips now incorporate ECC, especially to account for VRT effects. This short retrospective provides a brief analysis of our ISCA 2013 paper and its impact. We describe why we did the work, what we found and its implications, what the findings as well as the infrastructure we built to discover them have enabled in later works, and our thoughts on what the future may bring.
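
To make the profiling methodology concrete, here is a minimal Python sketch of a retention-time test loop in the spirit of the paper's FPGA-based infrastructure. The `dram` tester interface is hypothetical, and a real experiment must also vary the stored data pattern (to expose DPD) and repeat the measurement over long periods (to expose VRT):

    import time

    def profile_retention(dram, pattern: bytes, wait_times_s):
        """Count bit errors after pausing refresh for increasing intervals.

        `dram` is a hypothetical handle to an FPGA-based DRAM tester with
        write_region(), read_region(), and set_refresh_enabled() methods.
        """
        errors_per_wait = {}
        for t in sorted(wait_times_s):
            dram.write_region(pattern)           # fill the region under test
            dram.set_refresh_enabled(False)      # let cells leak for t seconds
            time.sleep(t)
            dram.set_refresh_enabled(True)
            readback = dram.read_region(len(pattern))
            # int.bit_count() requires Python 3.10+
            errors_per_wait[t] = sum(
                (a ^ b).bit_count() for a, b in zip(pattern, readback)
            )
        return errors_per_wait  # retention time ~ largest t with zero errors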

Related Content


Recent advances in Neural Radiance Fields (NeRFs) treat novel view synthesis as a Sparse Radiance Field (SRF) optimization problem, using sparse voxels for efficient and fast rendering (e.g., Plenoxels, Instant-NGP). To leverage machine learning and the adoption of SRFs as a 3D representation, we present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis consisting of approximately 17 million images rendered from nearly 40,000 shapes at high resolution (400 × 400 pixels). The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis and includes more than one million 3D-optimized radiance fields with multiple voxel resolutions. Furthermore, we propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only a few views. This is done by using the densely collected SPARF dataset and 3D sparse convolutions. SuRFNet employs partial SRFs from one or a few images and a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields that can be rendered from novel views. Our approach achieves state-of-the-art results on the task of unconstrained novel view synthesis from a few views on ShapeNet, as compared to recent baselines. The SPARF dataset is made public, with the code and models, on the project website //abdullahamdi.com/sparf/ .
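
For orientation, the sketch below shows one simple way to store and query a sparse voxel radiance field in Python: only occupied voxels carry (density, color) features, which is what makes sparse grids cheap to optimize and render. The layout is illustrative, not the SPARF/SuRFNet data format:

    import numpy as np

    class SparseVoxelField:
        """Toy sparse radiance field: features exist only for occupied voxels."""

        def __init__(self, resolution: int):
            self.resolution = resolution
            self.voxels = {}  # (i, j, k) -> (density, rgb)

        def set_voxel(self, ijk, density, rgb):
            self.voxels[tuple(ijk)] = (float(density), np.asarray(rgb, float))

        def query(self, xyz):
            """Nearest-voxel lookup for a point in [0, 1)^3; empty space is free."""
            ijk = tuple(np.clip((np.asarray(xyz) * self.resolution).astype(int),
                                0, self.resolution - 1))
            return self.voxels.get(ijk, (0.0, np.zeros(3)))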

Large Language Models (LLMs) have showcased impressive reasoning capabilities, particularly when guided by specifically designed prompts in complex reasoning tasks such as math word problems. These models typically solve tasks using a chain-of-thought approach, which not only bolsters their reasoning abilities but also provides valuable insights into their problem-solving process. However, there is still significant room for enhancing the reasoning abilities of LLMs. Some studies suggest that the integration of an LLM output verifier can boost reasoning accuracy without necessitating additional model training. In this paper, we follow these studies and introduce a novel graph-based method to further augment the reasoning capabilities of LLMs. We posit that multiple solutions to a reasoning task, generated by an LLM, can be represented as a reasoning graph due to the logical connections between intermediate steps from different reasoning paths. Therefore, we propose the Reasoning Graph Verifier (RGV) to analyze and verify the solutions generated by LLMs. By evaluating these graphs, models can yield more accurate and reliable results. Our experimental results show that our graph-based verification method not only significantly enhances the reasoning abilities of LLMs but also outperforms existing verifier methods in terms of improving these models' reasoning performance.
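
A stripped-down sketch of the reasoning-graph idea follows: intermediate steps from several sampled solutions become shared graph nodes, so agreement between reasoning paths shows up as structure. The support-counting score below is an illustrative stand-in for the paper's learned verifier:

    from collections import defaultdict

    def build_reasoning_graph(solutions):
        """solutions: list of solutions, each a list of intermediate-step strings."""
        node_support = defaultdict(int)
        edge_support = defaultdict(int)
        for steps in solutions:
            for step in steps:
                node_support[step] += 1
            for a, b in zip(steps, steps[1:]):   # edges between consecutive steps
                edge_support[(a, b)] += 1
        return node_support, edge_support

    def score(steps, node_support, edge_support):
        """Favor paths whose steps and transitions recur across solutions."""
        return (sum(node_support[s] for s in steps)
                + sum(edge_support[e] for e in zip(steps, steps[1:])))

    # Verification: sample several solutions, score each path, and return
    # the final answer of the highest-scoring one.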

In recent years, cyberattacks on smart grids have become more frequent. Among the many malicious activities that can be launched against smart grids, False Data Injection (FDI) attacks have raised significant concerns from both academia and industry. FDI attacks can affect the internal state estimation process, which is critical for smart grid monitoring and control, and can thus bypass conventional Bad Data Detection (BDD) methods. Hence, prompt detection and precise localization of FDI attacks are becoming of paramount importance to ensure the security and safety of smart grids. Several papers have recently started to study and analyze this topic from different perspectives and to address existing challenges. Data-driven techniques and mathematical modeling are the major ingredients of the proposed approaches. The primary objective of this work is to provide a systematic review of, and insights into, approaches for the joint detection and localization of FDI attacks, considering that other surveys have mainly concentrated on detection without detailed coverage of localization. For this purpose, we select and inspect more than forty major research contributions, conducting a detailed analysis of their methodology and objectives in relation to FDI attack detection and localization. We present our key findings on the identified papers according to different criteria, such as the employed FDI attack localization techniques, the utilized evaluation scenarios, the investigated FDI attack types, the application scenarios, the adopted methodologies, and the use of additional data. Finally, we discuss open issues and future research directions.

In recent years, the metaverse and digital humans have become important areas of focus in both research and industry. However, existing digital humans still lack realistic affective traits, making emotional interaction with humans difficult. Grounded in developments in artificial intelligence, human-computer interaction, virtual reality, and affective computing, this paper proposes the concept and technical framework of "Affective Digital Twins for Digital Human", based on the philosophy of digital twin technology. The paper discusses several key technical issues, including affective modeling, affective perception, affective encoding, and affective expression. On this basis, the paper offers a preliminary vision of the future application prospects of affective digital twins for digital humans, while considering potential problems that may need to be addressed.

This paper introduces TunesFormer, an efficient Transformer-based dual-decoder model specifically designed for the generation of melodies that adhere to user-defined musical forms. Trained on 214,122 Irish tunes, TunesFormer utilizes techniques including bar patching and control codes. Bar patching reduces sequence length and generation time, while control codes guide TunesFormer in producing melodies that conform to desired musical forms. Our evaluation demonstrates TunesFormer's superior efficiency: it is 3.22 times faster than GPT-2 and 1.79 times faster than a linear-complexity model of equal scale, while offering comparable performance in controllability and other metrics. TunesFormer provides a novel tool for musicians, composers, and music enthusiasts alike to explore the vast landscape of Irish music. Our model and code are available at //github.com/sander-wood/tunesformer.
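
To illustrate bar patching, the toy function below splits the body of an ABC-notation tune at bar lines so that each bar becomes a single patch (one sequence position) rather than many character tokens; the actual model's tokenization details will differ, and repeat signs are not handled here:

    def bar_patch(abc_body: str, max_bars: int = 64):
        """Split an ABC tune body into bar-level patches."""
        patches = [bar.strip() for bar in abc_body.split("|") if bar.strip()]
        return patches[:max_bars]

    tune = "E2E GEG|B2B gBg|e2e dBA|B2B gdB"
    print(bar_patch(tune))  # ['E2E GEG', 'B2B gBg', 'e2e dBA', 'B2B gdB']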

This paper studies the spatial manifestations of order reduction that occur when time-stepping initial-boundary-value problems (IBVPs) with high-order Runge-Kutta methods. For such IBVPs, geometric structures arise that have no analog in ODE IVPs: boundary layers appear, induced by a mismatch between the approximation error in the interior and at the boundaries. To understand these boundary layers, an analysis of the modes of the numerical scheme is conducted, which explains under which circumstances boundary layers persist over many time steps. Based on this, two remedies to order reduction are studied: first, a new condition on the Butcher tableau, called weak stage order, that is compatible with diagonally implicit Runge-Kutta (DIRK) schemes; and second, modified boundary conditions, whose impact on the boundary layers is analyzed.
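
For orientation, one common formulation of the weak stage order condition on a Runge-Kutta tableau (A, b, c) with s stages is sketched below (paraphrased; see the paper for the precise statement):

    % Stage-order residuals of the tableau (powers of c taken componentwise):
    \[
      \tau^{(k)} \;=\; A\,c^{\,k-1} \;-\; \tfrac{1}{k}\,c^{\,k}, \qquad k = 1, 2, \dots
    \]
    % Weak stage order q: the residuals live in a subspace that A maps into
    % itself and that the weight vector b cannot "see":
    \[
      \exists\, V \subseteq \mathbb{R}^{s}:\quad
      \tau^{(k)} \in V \ \ (1 \le k \le q), \qquad
      A V \subseteq V, \qquad
      b^{\top} v = 0 \ \ \forall\, v \in V .
    \]
    % Classical stage order q corresponds to \tau^{(k)} = 0, which is highly
    % restrictive for DIRK tableaus; weak stage order relaxes that requirement.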

Machine Reading Comprehension (MRC) models tend to take advantage of spurious correlations (also known as dataset bias or annotation artifacts in the research community). Consequently, these models may perform the MRC task without fully comprehending the given context and question, which is undesirable since it may result in low robustness against distribution shift. This paper delves into the concept of answer-position bias, where a significant percentage of training questions have answers located solely in the first sentence of the context. We propose the Single-Sentence Reader as a new approach for addressing answer-position bias in MRC. We implement this approach using six different models and thoroughly analyze their performance. Remarkably, our proposed Single-Sentence Readers achieve results that nearly match those of models trained on conventional training sets, proving their effectiveness. Our study also discusses several challenges our Single-Sentence Readers encounter and proposes a potential solution.
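
As an illustration of the single-sentence reading strategy, the sketch below scores each context sentence independently with an off-the-shelf extractive QA model and keeps the best span, so the absolute position of the answer in the full context carries no signal. The model choice and the naive sentence splitter are assumptions, not the paper's exact setup:

    from transformers import pipeline

    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    def single_sentence_read(question: str, context: str):
        """Run the reader on one sentence at a time; return the best span."""
        sentences = [s.strip() + "." for s in context.split(".") if s.strip()]
        candidates = [qa(question=question, context=s) for s in sentences]
        return max(candidates, key=lambda c: c["score"])  # has 'answer', 'score'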

This paper introduces a novel state estimation framework for robots using differentiable ensemble Kalman filters (DEnKF). DEnKF is a reformulation of the traditional ensemble Kalman filter that employs stochastic neural networks to model the process noise implicitly. Our work is an extension of previous research on differentiable filters, which has provided a strong foundation for our modular and end-to-end differentiable framework. This framework enables each component of the system to function independently, leading to improved flexibility and versatility in implementation. Through a series of experiments, we demonstrate the flexibility of this model across a diverse set of real-world tracking tasks, including visual odometry and robot manipulation. Moreover, we show that our model effectively handles noisy observations, is robust in the absence of observations, and outperforms state-of-the-art differentiable filters in terms of error metrics. Specifically, we observe a significant improvement of at least 59% in translational error when using DEnKF with noisy observations. Our results underscore the potential of DEnKF in advancing state estimation for robotics. Code for DEnKF is available at //github.com/ir-lab/DEnKF.
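
For context, here is a compact numpy sketch of one classical stochastic ensemble Kalman filter step, the core that DEnKF makes end-to-end differentiable by, among other things, replacing the hand-written process model with a stochastic neural network. All names are illustrative; this is not the paper's code:

    import numpy as np

    def enkf_step(ensemble, y, process, H, R, rng):
        """One EnKF step. ensemble: (N, d) states; y: (m,) observation;
        process: state -> state; H: (m, d) observation matrix; R: (m, m) noise."""
        X = np.stack([process(x) for x in ensemble])      # forecast each member
        A = X - X.mean(axis=0)                            # ensemble anomalies
        P = A.T @ A / (len(X) - 1)                        # sample covariance
        S = H @ P @ H.T + R                               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=len(X))
        return X + (Y - X @ H.T) @ K.T                    # perturbed-obs update

    # Usage: rng = np.random.default_rng(0); iterate enkf_step over a stream
    # of observations, estimating the state as the ensemble mean.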

This paper investigates the performance of the Large Language Models (LLMs) ChatGPT-3.5 and GPT-4 in solving introductory programming tasks. Based on the performance, implications for didactic scenarios and assessment formats utilizing LLMs are derived. For the analysis, 72 Python tasks for novice programmers were selected from the free site CodingBat. Full task descriptions were used as input to the LLMs, while the generated replies were evaluated using CodingBat's unit tests. In addition, the general availability of textual explanations and program code was analyzed. The results show high rates of correct responses, between 94.4% and 95.8%, and reliable availability of textual explanations and program code, which opens up new ways to incorporate LLMs into programming education and assessment.
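
The evaluation protocol is easy to picture in code. The sketch below feeds each task description to an LLM and checks the generated function against unit tests; the `ask_llm` helper stands in for whichever chat API is used and is an assumption, not the paper's or CodingBat's tooling:

    def evaluate(tasks, ask_llm):
        """tasks: iterable of (description, function_name, [(args, expected), ...])."""
        passed = 0
        for description, fname, tests in tasks:
            # Assumes the reply is plain code, not a markdown-fenced block.
            code = ask_llm(f"Write a Python function for this task:\n{description}")
            namespace = {}
            try:
                exec(code, namespace)    # run untrusted code in a sandbox in practice
                fn = namespace[fname]
                if all(fn(*args) == expected for args, expected in tests):
                    passed += 1
            except Exception:
                pass                     # failed to compile or wrong behavior
        return passed / len(tasks)       # fraction of tasks solved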

This paper presents an exhaustive quantitative and qualitative evaluation of Large Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We employ eight distinct datasets that encompass aspects including entity, relation, and event extraction, link prediction, and question answering. Empirically, our findings suggest that GPT-4 outperforms ChatGPT in the majority of tasks and even surpasses fine-tuned models in certain reasoning and question-answering datasets. Moreover, our investigation extends to the potential generalization ability of LLMs for information extraction, which culminates in the presentation of the Virtual Knowledge Extraction task and the development of the VINE dataset. Drawing on these empirical findings, we further propose AutoKG, a multi-agent-based approach employing LLMs for KG construction and reasoning, which aims to chart the future of this field and offer exciting opportunities for advancement. We anticipate that our research can provide invaluable insights for future undertakings of KG construction and reasoning. Code and datasets will be available at //github.com/zjunlp/AutoKG.
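
To give a flavor of the LLM-based KG construction step evaluated here, the sketch below prompts a model for triples and parses its reply; the prompt format and the `ask_llm` helper are assumptions, not AutoKG's actual interface:

    def extract_triples(text: str, ask_llm):
        """Ask an LLM for (subject | relation | object) triples, one per line."""
        prompt = ("Extract knowledge-graph triples from the text below.\n"
                  "Format: subject | relation | object, one triple per line.\n\n"
                  f"Text: {text}")
        triples = []
        for line in ask_llm(prompt).splitlines():
            parts = [p.strip() for p in line.split("|")]
            if len(parts) == 3:
                triples.append(tuple(parts))   # keep only well-formed lines
        return triples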
