
A/B testing is critical for modern technology companies to evaluate the effectiveness of newly developed products against standard baselines. This paper studies optimal designs that aim to maximize the amount of information obtained from online experiments to estimate treatment effects accurately. We propose three optimal allocation strategies in a dynamic setting where treatments are sequentially assigned over time. These strategies are designed to minimize the variance of the treatment effect estimator when data follow a non-Markov decision process or a (time-varying) Markov decision process. We further develop estimation procedures based on existing off-policy evaluation (OPE) methods and conduct extensive experiments in various environments to demonstrate the effectiveness of the proposed methodologies. In theory, we prove the optimality of the proposed treatment allocation design and establish upper bounds for the mean squared errors of the resulting treatment effect estimators.
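
The abstract does not spell out the allocation rule, but the intuition behind variance-minimizing allocation can be seen in the classic static two-arm analogue (Neyman allocation), sketched below under the assumption of independent outcomes with known arm standard deviations; the function names and toy data are illustrative and not taken from the paper.

```python
import numpy as np

def neyman_allocation(sigma_treat, sigma_ctrl):
    """Fraction of units assigned to treatment that minimizes the variance of the
    difference-in-means estimator, Var = sigma_t^2/(p*n) + sigma_c^2/((1-p)*n),
    which is minimized at p = sigma_t / (sigma_t + sigma_c)."""
    return sigma_treat / (sigma_treat + sigma_ctrl)

def ate_and_variance(y_treat, y_ctrl):
    """Difference-in-means treatment effect estimate and its estimated variance."""
    ate = y_treat.mean() - y_ctrl.mean()
    var = y_treat.var(ddof=1) / len(y_treat) + y_ctrl.var(ddof=1) / len(y_ctrl)
    return ate, var

rng = np.random.default_rng(0)
p = neyman_allocation(sigma_treat=2.0, sigma_ctrl=1.0)   # 2/3 of traffic to treatment
n = 10_000
n_t = int(p * n)
y_t = rng.normal(0.5, 2.0, n_t)        # toy treatment outcomes
y_c = rng.normal(0.0, 1.0, n - n_t)    # toy control outcomes
print(p, *ate_and_variance(y_t, y_c))
```

With equal standard deviations this reduces to the familiar 50/50 split; the paper's contribution lies in extending such variance-minimizing designs to sequential, (non-)Markovian settings.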

Related content

The effectiveness of recommendation systems is pivotal to user engagement and satisfaction in online platforms. As these recommendation systems increasingly influence user choices, their evaluation transcends mere technical performance and becomes central to business success. This paper addresses the multifaceted nature of recommendation system evaluation by introducing a comprehensive suite of metrics, each tailored to capture a distinct aspect of system performance. We discuss similarity metrics that quantify the precision of content-based and collaborative filtering mechanisms, along with candidate generation metrics, which measure how well the system identifies a broad yet pertinent range of items. Following this, we delve into predictive metrics that assess the accuracy of forecasted preferences, ranking metrics that evaluate the order in which recommendations are presented, and business metrics that align system performance with economic objectives. Our approach emphasizes the contextual application of these metrics and their interdependencies. In this paper, we identify the strengths and limitations of current evaluation practices and highlight the nuanced trade-offs that emerge when optimizing recommendation systems across different metrics. The paper concludes by proposing a framework for selecting and interpreting these metrics not only to improve system performance but also to advance business goals. This work aims to aid researchers and practitioners in critically assessing recommendation systems and to foster the development of more nuanced, effective, and economically viable personalization strategies. Our code is available at //github.com/aryan-jadon/Evaluation-Metrics-for-Recommendation-Systems.
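
As a concrete reference point for the ranking metrics mentioned above, here is a minimal sketch of precision@k and NDCG@k in plain Python; it follows the standard definitions and is not taken from the linked repository.

```python
import numpy as np

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    return len(set(top_k) & set(relevant)) / k

def ndcg_at_k(recommended, relevance, k):
    """Normalized discounted cumulative gain over the top-k items.
    `relevance` maps item -> graded relevance (items not listed count as 0)."""
    gains = [relevance.get(item, 0.0) for item in recommended[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(g / np.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

recommended = ["a", "b", "c", "d"]
print(precision_at_k(recommended, relevant={"b", "d", "e"}, k=3))          # 1/3
print(ndcg_at_k(recommended, {"b": 3.0, "d": 1.0, "e": 2.0}, k=4))
```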

With the increasing importance of video data in real-world applications, there is a rising need for efficient object detection methods that utilize temporal information. While existing video object detection (VOD) techniques employ various strategies to address this challenge, they typically depend on locally adjacent frames or randomly sampled images within a clip. Although recent Transformer-based VOD methods have shown promising results, their reliance on multiple inputs and additional network complexity to incorporate temporal information limits their practical applicability. In this paper, we propose a novel approach to single-image object detection, called Context Enhanced TRansformer (CETR), which incorporates temporal context into DETR using a newly designed memory module. To efficiently store temporal information, we construct a class-wise memory that collects contextual information across data. Additionally, we present a classification-based sampling technique to selectively utilize the relevant memory for the current image. At test time, we introduce a memory adaptation method that updates individual memory entries by considering the test distribution. Experiments on the CityCam and ImageNet VID datasets demonstrate the efficiency of the framework in various video settings. The project page and code will be made available at: //ku-cvlab.github.io/CETR.
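
The paper defines the memory module precisely; the sketch below is only a toy illustration of the general idea of a class-wise memory with momentum updates and classification-based retrieval. The `ClassWiseMemory` class and its parameters are hypothetical, not CETR's actual implementation.

```python
import numpy as np

class ClassWiseMemory:
    """Toy class-wise memory: one running-average context vector per class,
    updated with momentum and retrieved for the classes most likely present."""
    def __init__(self, num_classes, dim, momentum=0.99):
        self.memory = np.zeros((num_classes, dim), dtype=np.float32)
        self.initialized = np.zeros(num_classes, dtype=bool)
        self.momentum = momentum

    def update(self, class_id, feature):
        """Fold a new per-class feature into the running average."""
        if not self.initialized[class_id]:
            self.memory[class_id] = feature
            self.initialized[class_id] = True
        else:
            m = self.momentum
            self.memory[class_id] = m * self.memory[class_id] + (1 - m) * feature

    def retrieve(self, class_scores, top_k=3):
        """Return memory entries for the top-k scoring classes (sampling step)."""
        ids = np.argsort(class_scores)[::-1][:top_k]
        ids = [i for i in ids if self.initialized[i]]
        return ids, self.memory[ids]

mem = ClassWiseMemory(num_classes=10, dim=4)
mem.update(2, np.ones(4, dtype=np.float32))
ids, context = mem.retrieve(class_scores=np.random.rand(10))
print(ids, context.shape)
```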

Scientific Information Extraction (ScientificIE) is a critical task that involves the identification of scientific entities and their relationships. The complexity of this task is compounded by the necessity for domain-specific knowledge and the limited availability of annotated data. Two of the most popular datasets for ScientificIE are SemEval-2018 Task-7 and SciERC. They have overlapping samples and differ in their annotation schemes, which leads to conflicts. In this study, we first introduce a novel approach based on multi-task learning to address label variations. We then propose a soft-labeling technique that converts inconsistent labels into probability distributions. The experimental results demonstrate that the proposed method enhances model robustness to label noise and improves end-to-end performance on both ScientificIE tasks. The analysis reveals that label variations can be particularly effective in handling ambiguous instances. Furthermore, the richness of the information captured by label variations can potentially reduce data size requirements. The findings highlight the importance of releasing variation labels and encourage future research on label variation in other tasks and domains. Overall, this study demonstrates the effectiveness of multi-task learning and the potential of label variations to enhance the performance of ScientificIE.
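
A minimal sketch of the soft-labeling idea, assuming conflicting annotations are turned into a relative-frequency distribution (with light smoothing) and trained against with a soft-target cross-entropy; the label inventory and smoothing value are illustrative, and the paper's exact mapping may differ.

```python
import numpy as np

LABELS = ["USAGE", "PART_OF", "COMPARE"]  # toy relation inventory

def soft_label(annotations, smoothing=0.05):
    """Turn possibly conflicting annotations of one instance into a
    probability distribution over labels (relative frequency + smoothing)."""
    counts = np.array([annotations.count(label) for label in LABELS], dtype=float)
    return (counts + smoothing) / (counts.sum() + smoothing * len(LABELS))

def soft_cross_entropy(pred_logits, target_probs):
    """Cross-entropy of model logits against a soft target distribution."""
    log_probs = pred_logits - np.log(np.exp(pred_logits).sum())
    return -(target_probs * log_probs).sum()

# One sample annotated USAGE in one dataset and COMPARE in the other.
target = soft_label(["USAGE", "COMPARE"])
logits = np.array([1.2, -0.3, 0.8])
print(target, soft_cross_entropy(logits, target))
```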

Reactive transport in permeable porous media is relevant for a variety of applications, but poses a significant challenge due to the range of length and time scales. Multiscale methods that aim to link microstructure with the macroscopic response of geo-materials have been developed, but they require the repeated solution of the small-scale problem, which provides the motivation for this work. We present an efficient computational method to study fluid flow and solute transport problems in periodic porous media. Fluid flow is governed by the Stokes equation, and solute transport is governed by the advection-diffusion equation. We follow the accelerated computational micromechanics approach, which leads to an iterative computational method where each step is either local or the solution of a Poisson equation. This enables us to implement these methods on accelerators like graphics processing units (GPUs) and exploit their massively parallel architecture. We verify the approach by comparing the results against established computational methods and then demonstrate the accuracy, efficacy, and performance by studying various examples. This method efficiently calculates the effective transport properties for complex pore geometries.
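
The statement that each iteration is "either local or the solution of a Poisson equation" is what makes the scheme GPU friendly: on a periodic cell, the Poisson solve reduces to a pointwise division in Fourier space. The sketch below shows that building block with NumPy FFTs (a GPU version would swap in, e.g., CuPy); it is a generic periodic Poisson solver, not the paper's full Stokes/advection-diffusion scheme.

```python
import numpy as np

def solve_poisson_periodic(f, L=1.0):
    """Solve -laplacian(u) = f on a periodic square [0, L)^2 via FFT.
    In Fourier space the equation becomes |k|^2 * u_hat = f_hat, so the
    'solve' is a pointwise division -- cheap and massively parallel."""
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    f_hat = np.fft.fft2(f)
    u_hat = np.zeros_like(f_hat)
    nonzero = k2 > 0
    u_hat[nonzero] = f_hat[nonzero] / k2[nonzero]   # zero-mean solution
    return np.real(np.fft.ifft2(u_hat))

n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
u = solve_poisson_periodic(f)
# Analytic check: u = f / (8 * pi^2) for this right-hand side.
print(np.max(np.abs(u - f / (8 * np.pi**2))))
```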

Whereas distributed computing research has been very successful in exploring the solvability/impossibility border of distributed computing problems like consensus in representative classes of computing models with respect to model parameters like failure bounds, this is not the case for characterizing necessary and sufficient communication requirements. In this paper, we introduce network abstractions as a novel approach for modeling communication requirements in asynchronous distributed systems. A network abstraction of a run is a sequence of directed graphs on the set of processes, where the $i$-th graph defines the potential message chains that may arise in the $i$-th portion of the run. Formally, it is defined via associating (potential) message sending times with the corresponding message receiving times in a message schedule. Network abstractions allow one to reason about the future causal cones that might arise in a run, hence they also facilitate reasoning about liveness properties, and they are inherently compatible with temporal epistemic reasoning frameworks. We demonstrate the utility of our approach by providing necessary and sufficient network abstractions for solving the canonical firing rebels with relay (FRR) problem, and variants thereof, in asynchronous systems with up to $f$ Byzantine processes. FRR is not only a basic primitive in clock synchronization and consensus algorithms, but also integrates several distributed computing problems, namely triggering events, agreement, and even stabilizing agreement, in a single problem instance.
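
To make the definition concrete, the toy sketch below represents a network abstraction as a sequence of directed graphs (here, adjacency dictionaries) and computes the future causal cone of a process by composing reachability through the successive graphs; this is only an illustration of the idea, not the paper's formal definition via message schedules.

```python
def future_causal_cone(start, graphs):
    """Processes that can be causally influenced by `start` when following
    message chains through the graphs G_1, G_2, ... in order.
    graphs: list of dicts mapping a process to the set of processes it can
    reach (via a message chain) in that portion of the run."""
    cone = {start}
    for g in graphs:
        cone |= {q for p in cone for q in g.get(p, set())}
    return cone

# Toy network abstraction over four processes.
g1 = {"p1": {"p2"}}                    # portion 1: p1 -> p2
g2 = {"p2": {"p3"}, "p4": {"p1"}}      # portion 2: p2 -> p3, p4 -> p1
g3 = {"p3": {"p4"}}                    # portion 3: p3 -> p4
print(future_causal_cone("p1", [g1, g2, g3]))   # {'p1', 'p2', 'p3', 'p4'}
```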

Purpose: One of the major reasons that totally implantable cochlear implants are not readily available is the lack of good implantable microphones. An implantable microphone has the potential to provide a range of benefits over external microphones for cochlear implant users, including the filtering ability of the outer ear, cosmetics, and usability in all situations. This paper presents results from experiments in human cadaveric ears of a piezofilm microphone concept under development as a possible component of a future implantable microphone system for use with cochlear implants. This microphone is referred to here as a drum microphone (DrumMic); it senses the robust and predictable motion of the umbo, the tip of the malleus. Methods: The performance of five DrumMics inserted in four different human cadaveric temporal bones was measured. Sensitivity, linearity, bandwidth, and equivalent input noise were measured during these experiments using a sound stimulus and measurement setup. Results: The sensitivity of the DrumMics was found to be tightly clustered across different microphones and ears despite differences in umbo and middle ear anatomy. The DrumMics were shown to behave linearly across a large dynamic range (46 dB SPL to 100 dB SPL) and across a wide bandwidth (100 Hz to 8 kHz). The equivalent input noise (0.1-10 kHz) of the DrumMic and amplifier referenced to the ear canal was measured to be 54 dB SPL and estimated to be 46 dB SPL after accounting for the pressure gain of the outer ear. Conclusion: The results demonstrate that the DrumMic behaves robustly across ears and fabrication. The equivalent input noise performance was shown to approach that of commercial hearing aid microphones. To advance this demonstration of the DrumMic concept to a future prototype implantable in humans, work on encapsulation, biocompatibility, and connectorization will be required.
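
The equivalent-input-noise figures quoted above follow from referring the measured output noise back through the microphone's sensitivity; the short sketch below shows that arithmetic. The output-noise and sensitivity values are made up for illustration, and the 8 dB outer-ear pressure gain is simply inferred from the quoted 54 dB and 46 dB figures.

```python
# Equivalent input noise (EIN): the sound pressure level at the input that would
# produce the measured output noise, i.e. output noise referred back through the
# microphone's sensitivity. Numbers below are illustrative only.

def equivalent_input_noise_db_spl(output_noise_dbv, sensitivity_dbv_per_pa):
    """EIN in dB SPL (re 20 uPa), given integrated output noise in dBV and
    sensitivity in dBV per pascal (1 Pa corresponds to 94 dB SPL)."""
    return output_noise_dbv - sensitivity_dbv_per_pa + 94.0

ein_at_ear_canal = equivalent_input_noise_db_spl(output_noise_dbv=-80.0,
                                                 sensitivity_dbv_per_pa=-40.0)
outer_ear_gain_db = 8.0   # assumed pressure gain of the outer ear
print(ein_at_ear_canal, ein_at_ear_canal - outer_ear_gain_db)   # 54.0, 46.0
```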

As software becomes increasingly complex and prone to vulnerabilities, automated vulnerability detection is critically important, yet challenging. Given the significant successes of large language models (LLMs) in various tasks, there is growing anticipation of their efficacy in vulnerability detection. However, a quantitative understanding of their potential in vulnerability detection is still missing. To bridge this gap, we introduce a comprehensive vulnerability benchmark VulBench. This benchmark aggregates high-quality data from a wide range of CTF (Capture-the-Flag) challenges and real-world applications, with annotations for each vulnerable function detailing the vulnerability type and its root cause. Through our experiments encompassing 16 LLMs and 6 state-of-the-art (SOTA) deep learning-based models and static analyzers, we find that several LLMs outperform traditional deep learning approaches in vulnerability detection, revealing an untapped potential in LLMs. This work contributes to the understanding and utilization of LLMs for enhanced software security.
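
A benchmark of this kind implies an evaluation loop of roughly the following shape: prompt a model with an annotated function and compare its answer to the ground-truth label. The sketch below is a toy version; `ask_model`, the prompt, and the samples are placeholders, not VulBench's actual prompts or data format.

```python
# Toy evaluation loop for an LLM-based vulnerability detector.
# `ask_model` stands in for an actual LLM call; the benchmark's real prompts,
# label set, and data format are defined by VulBench itself.

PROMPT = ("Does the following C function contain a vulnerability? "
          "Answer with one label such as 'buffer-overflow' or 'safe'.\n\n{code}")

def evaluate(samples, ask_model):
    """samples: list of dicts with 'code' and 'label'.
    ask_model: callable str -> str returning the model's predicted label."""
    correct = 0
    for sample in samples:
        prediction = ask_model(PROMPT.format(code=sample["code"])).strip().lower()
        correct += prediction == sample["label"]
    return correct / len(samples)

samples = [
    {"code": "void f(char *s){ char b[8]; strcpy(b, s); }", "label": "buffer-overflow"},
    {"code": "int add(int a, int b){ return a + b; }", "label": "safe"},
]
# Stub model that always answers 'safe', just to exercise the loop.
print(evaluate(samples, ask_model=lambda prompt: "safe"))   # 0.5
```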

This paper establishes the equivalence between Local Differential Privacy (LDP) and a global limit on learning any knowledge about an object. However, an output from an LDP query is not necessarily required to provide an amount of knowledge equal to the upper bound of the learning limit. Since the amount of knowledge gained should be proportional to the incurred privacy loss, the traditional approach of using the DP guarantee to measure privacy loss can occasionally overestimate the actual privacy loss. This is especially problematic in privacy accounting for LDP, where privacy loss is computed by accumulating the DP guarantees. To address this issue, this paper introduces the concept of \textit{realized privacy loss}, which measures the actual knowledge gained by the analyst after a query, as a more accurate measure of privacy loss. The realized privacy loss is integrated into the privacy accounting of fully adaptive composition, where an adversary adaptively selects queries based on previous results. The Bayesian Privacy Filter is implemented to continually accept queries until the realized privacy loss of the composed queries equals the DP guarantee of the composition, allowing full utilization of the privacy budget. Tracking the realized privacy loss during the composition is achieved through the Bayesian Privacy Odometer, and the gap between the privacy budget and the realized privacy loss measures the leeway the DP guarantee leaves for future queries. A branch-and-bound method is devised to enable the Bayesian Privacy Filter to safeguard objects with continuous values. The Bayesian Privacy Filter is proven to be at least as efficient as the basic composition, and more efficient if the queries are privacy-loss compactible. Experimental results indicate that the Bayesian Privacy Filter outperforms the basic composition by a factor of one to four when composing linear and logistic regressions.
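
A toy illustration of the accounting idea, assuming a binary secret queried through eps-LDP randomized response: the filter tracks the realized privacy loss (the shift in the adversary's log-odds after each output) and keeps answering queries while the accumulated loss fits within the budget. For plain randomized response the realized loss always equals the per-query guarantee, so the advantage over standard composition appears only for queries whose outputs are less informative; this sketch is not the paper's construction.

```python
import math, random

def randomized_response(x, eps):
    """Report the true bit with probability e^eps / (1 + e^eps) (eps-LDP)."""
    p_truth = math.exp(eps) / (1.0 + math.exp(eps))
    return x if random.random() < p_truth else 1 - x

def realized_loss(prior, output, eps):
    """Posterior over the secret bit and the absolute shift in the adversary's
    log-odds after seeing one randomized-response output (toy realized loss)."""
    p_truth = math.exp(eps) / (1.0 + math.exp(eps))
    like1 = p_truth if output == 1 else 1.0 - p_truth   # P(output | x = 1)
    like0 = p_truth if output == 0 else 1.0 - p_truth   # P(output | x = 0)
    posterior = like1 * prior / (like1 * prior + like0 * (1.0 - prior))
    shift = abs(math.log(posterior / (1 - posterior)) - math.log(prior / (1 - prior)))
    return posterior, shift

random.seed(0)
secret, prior, eps_per_query, budget = 1, 0.5, 0.5, 2.0
spent, answered = 0.0, 0
while True:
    out = randomized_response(secret, eps_per_query)
    posterior, loss = realized_loss(prior, out, eps_per_query)
    if spent + loss > budget:          # filter: reject the query and stop
        break
    spent += loss
    prior = posterior                  # adversary updates adaptively
    answered += 1
print(answered, round(spent, 3))
```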

Reasoning is a fundamental aspect of human intelligence that plays a crucial role in activities such as problem solving, decision making, and critical thinking. In recent years, large language models (LLMs) have made significant progress in natural language processing, and it has been observed that these models may exhibit reasoning abilities when they are sufficiently large. However, it is not yet clear to what extent LLMs are capable of reasoning. This paper provides a comprehensive overview of the current state of knowledge on reasoning in LLMs, including techniques for improving and eliciting reasoning in these models, methods and benchmarks for evaluating reasoning abilities, findings and implications of previous research in this field, and suggestions for future directions. Our aim is to provide a detailed and up-to-date review of this topic and to stimulate meaningful discussion and future work.

We address the task of automatically scoring candidate competency from textual features, namely the automatic speech recognition (ASR) transcriptions of asynchronous video job interviews (AVIs). The key challenge is how to construct the dependency relation between questions and answers and how to model the semantic-level interaction for each question-answer (QA) pair. However, most recent studies on AVIs focus on better representing questions and answers while ignoring the dependency information and interaction between them, which is critical for QA evaluation. In this work, we propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs. Specifically, we construct a sentence-level relational graph neural network to capture the dependency information of sentences within and between the question and the answer. Based on these graphs, we employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session. Finally, we propose a gated recurrent unit encoder to represent the temporal sequence of question-answer pairs for the final prediction. Empirical results on CHNAT (a real-world dataset) validate that our proposed model significantly outperforms text-matching-based benchmark models. Ablation studies and experiments with 10 random seeds also show the effectiveness and stability of our models.
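
As a rough illustration of the pipeline's shape (attention pooling over sentence representations within each QA pair, a recurrent encoder over the sequence of QA pairs, and a final score), here is a heavily simplified PyTorch sketch; it omits the relational and reasoning graph components entirely, and all module names and dimensions are made up.

```python
import torch
import torch.nn as nn

class ToyQAScorer(nn.Module):
    """Greatly simplified stand-in for an HRGNN-style pipeline: attention-pool
    sentence embeddings within each QA pair, run a GRU over the sequence of
    QA pairs, and score the interview."""
    def __init__(self, dim=32):
        super().__init__()
        self.attn = nn.Linear(dim, 1)       # sentence-level attention weights
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, qa_sentence_embeddings):
        # qa_sentence_embeddings: list over QA pairs, each a (num_sentences, dim) tensor
        pooled = []
        for sentences in qa_sentence_embeddings:
            weights = torch.softmax(self.attn(sentences), dim=0)   # (n, 1)
            pooled.append((weights * sentences).sum(dim=0))        # (dim,)
        qa_seq = torch.stack(pooled).unsqueeze(0)                  # (1, num_qa, dim)
        _, hidden = self.gru(qa_seq)                               # (1, 1, dim)
        return self.scorer(hidden.squeeze(0)).squeeze(-1)          # (1,)

model = ToyQAScorer(dim=32)
qa_pairs = [torch.randn(4, 32), torch.randn(6, 32), torch.randn(3, 32)]
print(model(qa_pairs))
```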
