
We present a test for independence of two strictly stationary time series based on a bootstrap procedure for the distance covariance. Our test detects any kind of dependence between the two time series within an arbitrary maximum lag $L$. In simulation studies, our test outperforms alternative testing procedures. In proving the validity of the underlying bootstrap procedure, we generalise bounds for the Wasserstein distance between an empirical measure and its marginal distribution under the assumption of $\alpha$-mixing. Previous results of this kind only existed for i.i.d. processes.
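
As a rough illustration of the kind of statistic involved, the Python sketch below computes the squared sample distance covariance between the two series at lags up to $L$ and calibrates it with a moving-block bootstrap of one series. The block bootstrap, the choice of summing over lags, and all function names are simplifications for illustration only and are not the exact procedure analysed in the paper.

    import numpy as np

    def dcov2(x, y):
        # Squared sample distance covariance via double-centred distance matrices.
        x = np.asarray(x, float); y = np.asarray(y, float)
        a = np.abs(x[:, None] - x[None, :])
        b = np.abs(y[:, None] - y[None, :])
        A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
        B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
        return float((A * B).mean())

    def lagged_dcov_stat(x, y, L):
        # Sum of distance covariances over lags -L, ..., L (illustrative aggregation).
        n = len(x); vals = []
        for h in range(-L, L + 1):
            if h >= 0:
                vals.append(dcov2(x[:n - h], y[h:]))
            else:
                vals.append(dcov2(x[-h:], y[:n + h]))
        return float(np.sum(vals))

    def block_bootstrap(y, block_len, rng):
        # Moving-block resample of y: keeps its serial dependence approximately,
        # breaks any cross-dependence with x (a crude surrogate for the null).
        n = len(y); idx = []
        while len(idx) < n:
            start = rng.integers(0, n - block_len + 1)
            idx.extend(range(start, start + block_len))
        return y[np.array(idx[:n])]

    def independence_test(x, y, L=5, B=200, block_len=10, seed=0):
        rng = np.random.default_rng(seed)
        y = np.asarray(y, float)
        obs = lagged_dcov_stat(x, y, L)
        boot = [lagged_dcov_stat(x, block_bootstrap(y, block_len, rng), L)
                for _ in range(B)]
        pval = (1 + sum(b >= obs for b in boot)) / (B + 1)
        return obs, pval

    # Example: two independent AR(1) series; the p-value should not be small.
    rng = np.random.default_rng(1)
    x = np.zeros(300); y = np.zeros(300)
    for t in range(1, 300):
        x[t] = 0.5 * x[t - 1] + rng.normal()
        y[t] = 0.5 * y[t - 1] + rng.normal()
    print(independence_test(x, y, L=3, B=100))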

Related Content

We present a method to create storytelling visualization with time series data. Many personal decisions nowadays rely on regular access to dynamic data, as we have seen during the COVID-19 pandemic. It is thus desirable to construct storytelling visualization for dynamic data that is selected by an individual for a specific context. Because of the need to tell data-dependent stories, predefined storyboards based on known data cannot easily accommodate dynamic data, nor can they scale up to many different individuals and contexts. Motivated initially by the need to communicate time series data during the COVID-19 pandemic, we developed a novel computer-assisted method for meta-authoring of stories, which enables the design of storyboards that include feature-action patterns in anticipation of potential features that may appear in dynamically arriving or selected data. In addition to meta-storyboards involving COVID-19 data, we also present storyboards for telling stories about progress in a machine learning workflow. Our approach is complementary to traditional methods for authoring storytelling visualization, and provides an efficient means to construct data-dependent storyboards for different data streams in similar contexts.
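
To make the idea of feature-action patterns concrete, the Python sketch below pairs simple feature detectors on a data window with story-generating actions. The rule structure, the example detectors, and the narration templates are purely illustrative assumptions and do not reflect the authors' implementation.

    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    @dataclass
    class FeatureAction:
        name: str
        detect: Callable[[Sequence[float]], bool]   # feature detector on a data window
        narrate: Callable[[Sequence[float]], str]   # action: emit a story segment

    def run_storyboard(rules: List[FeatureAction], window: Sequence[float]) -> List[str]:
        # Apply every rule whose feature is present in the current data window.
        return [r.narrate(window) for r in rules if r.detect(window)]

    # Hypothetical rules for a daily-cases stream (illustrative only).
    rules = [
        FeatureAction(
            "rising_trend",
            detect=lambda w: len(w) >= 7 and w[-1] > 1.2 * w[-7],
            narrate=lambda w: f"Cases rose from {w[-7]:.0f} to {w[-1]:.0f} over the last week.",
        ),
        FeatureAction(
            "new_peak",
            detect=lambda w: len(w) > 1 and w[-1] == max(w),
            narrate=lambda w: f"Today's value ({w[-1]:.0f}) is the highest observed so far.",
        ),
    ]

    print(run_storyboard(rules, [100, 110, 120, 150, 170, 200, 240, 260]))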

The study of UX dark patterns, i.e., UI designs that seek to manipulate user behaviors, often for the benefit of online services, has drawn significant attention in the CHI and CSCW communities in recent years. To complement previous studies that address dark patterns from (1) the designer's perspective on education and advocacy for ethical designs and (2) the policymaker's perspective on new regulations, we propose an end-user-empowerment intervention approach that helps users (1) raise their awareness of dark patterns and understand the underlying design intents, and (2) take action to counter the effects of dark patterns using a web augmentation approach. Through a two-phase co-design study, comprising 5 co-design workshops (N=12) and a 2-week technology probe study (N=15), we report findings on users' needs, preferences, and challenges in handling dark patterns, and we investigate users' feedback and reactions when their awareness of and action on dark patterns are empowered in a realistic in-situ setting.

In the evolving landscape of online communication, moderating hate speech (HS) presents an intricate challenge, compounded by the multimodal nature of digital content. This comprehensive survey delves into the recent strides in HS moderation, spotlighting the burgeoning role of large language models (LLMs) and large multimodal models (LMMs). Our exploration begins with a thorough analysis of current literature, revealing the nuanced interplay between textual, visual, and auditory elements in propagating HS. We uncover a notable trend towards integrating these modalities, primarily due to the complexity and subtlety with which HS is disseminated. A significant emphasis is placed on the advances facilitated by LLMs and LMMs, which have begun to redefine the boundaries of detection and moderation capabilities. We identify existing gaps in research, particularly in the context of underrepresented languages and cultures, and the need for solutions to handle low-resource settings. The survey concludes with a forward-looking perspective, outlining potential avenues for future research, including the exploration of novel AI methodologies, the ethical governance of AI in moderation, and the development of more nuanced, context-aware systems. This comprehensive overview aims to catalyze further research and foster a collaborative effort towards more sophisticated, responsible, and human-centric approaches to HS moderation in the digital era. WARNING: This paper contains offensive examples.

Event detection is a crucial information extraction task in many domains, such as Wikipedia or news. The task typically relies on trigger detection (TD) -- identifying token spans in the text that evoke specific events. While the notion of triggers should ideally be universal across domains, domain transfer for TD from high- to low-resource domains results in significant performance drops. We address the problem of negative transfer in TD by coupling triggers between domains using subject-object relations obtained from a rule-based open information extraction (OIE) system. We demonstrate that OIE relations injected through multi-task training can act as mediators between triggers in different domains, enhancing zero- and few-shot TD domain transfer and reducing performance drops, in particular when transferring from a high-resource source domain (Wikipedia) to a low(er)-resource target domain (news). Additionally, we combine this improved transfer with masked language modeling on the target domain, observing further TD transfer gains. Finally, we demonstrate that the gains are robust to the choice of the OIE system.
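
The sketch below illustrates one plausible form of such multi-task training: a shared encoder with separate token-classification heads for event triggers and for OIE subject/object arguments, with the two cross-entropy losses summed. The label-set sizes, the equal loss weighting, and the choice of roberta-base are assumptions made here for illustration, not the paper's exact configuration.

    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class MultiTaskTagger(nn.Module):
        """Shared encoder with two token-classification heads:
        one for event triggers, one for OIE subject/object arguments."""
        def __init__(self, model_name="roberta-base", n_trigger_tags=3, n_oie_tags=5):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)
            hidden = self.encoder.config.hidden_size
            self.trigger_head = nn.Linear(hidden, n_trigger_tags)
            self.oie_head = nn.Linear(hidden, n_oie_tags)

        def forward(self, input_ids, attention_mask):
            h = self.encoder(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
            return self.trigger_head(h), self.oie_head(h)

    tok = AutoTokenizer.from_pretrained("roberta-base")
    model = MultiTaskTagger()
    batch = tok(["The company acquired its rival in 2020."], return_tensors="pt")
    trig_logits, oie_logits = model(batch["input_ids"], batch["attention_mask"])

    # Dummy all-"O" labels just to show the joint objective; in practice each head
    # gets its own gold token labels. Equal weighting of the losses is an assumption.
    trig_labels = torch.zeros(batch["input_ids"].shape, dtype=torch.long)
    oie_labels = torch.zeros_like(trig_labels)
    ce = nn.CrossEntropyLoss()
    loss = ce(trig_logits.view(-1, 3), trig_labels.view(-1)) \
         + ce(oie_logits.view(-1, 5), oie_labels.view(-1))
    print(loss.item())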

We present a simple argument using Promise Theory and dimensional analysis for the Dunbar scaling hierarchy, supported by recent data from group formation in Wikipedia editing. We show how the assumption of a common priority seeds group alignment until the costs associated with attending to the group outweigh the benefits in a detailed balance scenario. Subject to partial efficiency of implementing promised intentions, we can reproduce a series of compatible rates that balance growth with entropy.

In this paper, the problem of low-latency communication and computation resource allocation for digital twin (DT) over wireless networks is investigated. In the considered model, multiple physical devices in the physical network (PN) need to frequently offload computation-task-related data to the digital network twin (DNT), which is generated and controlled by the central server. Due to the limited energy budget of the physical devices, both computation accuracy and wireless transmission power must be considered during the DT procedure. This joint communication and computation problem is formulated as an optimization problem whose goal is to minimize the overall transmission delay of the system under total PN energy and DNT model accuracy constraints. To solve this problem, an alternating algorithm is proposed that iteratively solves the device scheduling, power control, and data offloading subproblems. For the device scheduling subproblem, the optimal solution is obtained in closed form through the dual method. For the special case with one physical device, the optimal number of transmissions is revealed. Based on these theoretical findings, the original problem is transformed into a simplified problem for which the optimal device scheduling can be found. Numerical results verify that the proposed algorithm can reduce the transmission delay of the system by up to 51.2\% compared to conventional schemes.
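
The following LaTeX sketch gives an illustrative formulation of this type of problem; the notation (scheduling indicators $a_{k,t}$, transmit powers $p_{k,t}$, offloaded data sizes $d_{k,t}$, accuracy function $\Psi$) is introduced here only to be consistent with the abstract and is not the paper's exact formulation.

    \begin{align}
    \min_{\{a_{k,t}\},\,\{p_{k,t}\},\,\{d_{k,t}\}} \quad
      & \sum_{t=1}^{T}\sum_{k=1}^{K} a_{k,t}\,
        \frac{d_{k,t}}{B\log_2\!\left(1+\frac{p_{k,t} h_{k,t}}{\sigma^2}\right)} \\
    \text{s.t.} \quad
      & \sum_{t=1}^{T}\left(E^{\mathrm{cmp}}_{k,t} + a_{k,t}\,p_{k,t}\,\tau_{k,t}\right)
        \le E^{\max}_{k}, \quad \forall k, \\
      & \Psi\!\left(\{d_{k,t}\}\right) \ge \Psi_{\min}, \qquad
        a_{k,t}\in\{0,1\}, \quad 0 \le p_{k,t} \le p^{\max}_{k},
    \end{align}

where $a_{k,t}$ schedules device $k$ in slot $t$, $p_{k,t}$ and $d_{k,t}$ are its transmit power and offloaded data size, $\tau_{k,t}$ the resulting transmission time, $E^{\mathrm{cmp}}_{k,t}$ the local computation energy, and $\Psi(\cdot)$ the DNT model accuracy.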

We present a new methodology for handling AI errors by introducing weakly supervised AI error correctors with a priori performance guarantees. These AI correctors are auxiliary maps whose role is to moderate the decisions of some previously constructed underlying classifier by either approving or rejecting its decisions. The rejection of a decision can be used as a signal to suggest abstaining from making a decision. A key technical focus of the work is in providing performance guarantees for these new AI correctors through bounds on the probabilities of incorrect decisions. These bounds are distribution agnostic and do not rely on assumptions on the data dimension. Our empirical example illustrates how the framework can be applied to improve the performance of an image classifier in a challenging real-world task where training data are scarce.
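
As a minimal illustration of an approve/reject corrector, the Python sketch below fits a simple logistic map on features of the underlying classifier's decisions and rejects (i.e., abstains on) decisions whose predicted correctness falls below a threshold. The linear form, the threshold, and the synthetic data are assumptions made here; the paper's distribution-agnostic guarantees are not reproduced.

    import numpy as np

    class ApproveRejectCorrector:
        """Auxiliary corrector: approves or rejects decisions of an underlying
        classifier. A rejected decision is treated as an abstention signal."""
        def __init__(self, threshold=0.5):
            self.threshold = threshold
            self.w = None  # linear corrector weights (illustrative choice)

        def fit(self, feats, was_correct):
            # Weakly supervised fit: logistic regression on whether the underlying
            # classifier was correct, trained with a few plain gradient steps.
            X = np.c_[np.ones(len(feats)), feats]
            y = np.asarray(was_correct, float)
            self.w = np.zeros(X.shape[1])
            for _ in range(500):
                p = 1.0 / (1.0 + np.exp(-X @ self.w))
                self.w += 0.1 * X.T @ (y - p) / len(y)
            return self

        def approve(self, feats):
            X = np.c_[np.ones(len(feats)), feats]
            p = 1.0 / (1.0 + np.exp(-X @ self.w))
            return p >= self.threshold  # False => reject / abstain

    # Synthetic example: features of past decisions and whether they were correct.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 2))
    was_correct = (feats[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
    corrector = ApproveRejectCorrector().fit(feats, was_correct)
    print(corrector.approve(feats[:5]))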

This experience report outlines five tech transfer strategies developed over a period of 25 years at four Global 1000 companies (HP, Cisco, Qualcomm, and Nortel) to mitigate R&D challenges associated with duplicated effort, product quality, and time-to-market. The five strategies accelerate innovation through open knowledge sharing, rather than licensing intellectual property rights (IPR) such as patents, trade secrets, and copyrights. The strategies are based on corporate tech forums, conference panels, exploratory workshops, research reviews (at universities and companies), and talent exchanges. While the initial objective was to foster the corporate adoption of software best practices, over time the strategies had broader impact on company innovation, including incubating cross-company R&D collaborations, capturing organizational memory, cultivating and leveraging external research partnerships, and feeding company talent pipelines.

While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristic of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we proposed an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we used a spectral-spatial generator and a discriminator to identify land cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopted a conditional random field to refine the preliminary classification results generated by the GAN. Experimental results obtained using two commonly studied datasets demonstrate that the proposed framework achieved encouraging classification accuracy using a small number of training samples.
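
As a rough sketch of the spectral-spatial idea, the PyTorch snippet below applies 3-D convolutions to small hyperspectral cubes and outputs logits over land-cover classes plus an extra class for generated samples, as in GAN-based semi-supervised classification. The layer sizes and random input are illustrative, and the CRF refinement stage is only indicated in a comment; this is not the authors' exact network.

    import torch
    import torch.nn as nn

    class SpectralSpatialDiscriminator(nn.Module):
        """Classifies small hyperspectral cubes (bands x height x width) into
        land-cover classes plus one extra 'generated' class, GAN-style."""
        def __init__(self, n_classes=9):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
                nn.LeakyReLU(0.2),
                nn.Conv3d(16, 32, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
                nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, n_classes + 1)  # +1 for generated cubes

        def forward(self, cube):
            h = self.features(cube).flatten(1)
            return self.classifier(h)

    # A batch of 9x9 spatial patches with 103 spectral bands (Pavia-like sizes).
    cubes = torch.randn(4, 1, 103, 9, 9)
    logits = SpectralSpatialDiscriminator()(cubes)
    print(logits.shape)  # torch.Size([4, 10])
    # In the full framework, the per-pixel class probabilities would then be
    # refined with a conditional random field over the classification map.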
