
The persistent issue of wrongful convictions in the United States emphasizes the need for scrutiny and improvement of the criminal justice system. While statistical methods for the evaluation of forensic evidence, including glass, fingerprints, and DNA, have significantly contributed to solving intricate crimes, there is a notable lack of national-level standards to ensure the appropriate application of statistics in forensic investigations. We discuss the obstacles to applying statistics in court and emphasize the importance of making statistical interpretation accessible to non-statisticians, especially those who make decisions about potentially innocent individuals. We investigate the use and misuse of statistical methods in crime investigations, in particular the likelihood ratio approach. We further describe the use of graphical models, where hypotheses and evidence can be represented as nodes connected by arrows signifying association or causality. We emphasize the advantages of special graph structures, such as object-oriented Bayesian networks and chain event graphs, which allow for the concurrent examination of evidence of different types.
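As a minimal illustration of the likelihood ratio approach mentioned above (not taken from the paper), the sketch below compares how probable a piece of glass evidence is under a prosecution hypothesis versus a defence hypothesis; the refractive-index values and distribution parameters are invented for demonstration only.

```python
# Hypothetical likelihood-ratio calculation for glass evidence.
# All numbers below are illustrative assumptions, not casework data.
from scipy.stats import norm

# Measured refractive index of a glass fragment recovered from a suspect.
measurement = 1.51825

# Hp: the fragment comes from the crime-scene window (narrow, window-specific spread).
p_evidence_given_hp = norm.pdf(measurement, loc=1.51820, scale=0.00004)

# Hd: the fragment comes from an unrelated glass source (broad background population).
p_evidence_given_hd = norm.pdf(measurement, loc=1.51800, scale=0.00040)

likelihood_ratio = p_evidence_given_hp / p_evidence_given_hd
print(f"LR = {likelihood_ratio:.1f}")
# LR > 1 supports Hp over Hd; what is reported to the court is the strength
# of support for one hypothesis over the other, not a probability of guilt.
```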

Related Content

To address the increasing complexity and frequency of cybersecurity incidents, as highlighted by recent cybersecurity threat reports documenting over 10 billion instances, cyber threat intelligence (CTI) plays a critical role in the modern cybersecurity landscape by offering the insights required to understand and combat the constantly evolving nature of cyber threats. Inspired by the powerful capability of large language models (LLMs) in handling complex tasks, in this paper, we introduce a framework to benchmark, elicit, and improve cybersecurity incident analysis and response abilities in LLMs for Security Events (SEvenLLM). Specifically, we create a high-quality bilingual instruction corpus by crawling cybersecurity raw text from cybersecurity websites to overcome the lack of effective data for information extraction. Then, we design a pipeline to auto-select tasks from the task pool and convert the raw text into supervised corpora comprising question-response pairs. The instruction dataset SEvenLLM-Instruct is used to train cybersecurity LLMs with a multi-task learning objective (27 well-designed tasks) for augmenting the analysis of cybersecurity events. Extensive experiments on our curated benchmark (SEvenLLM-bench) demonstrate that SEvenLLM performs more sophisticated threat analysis and fortifies defenses against the evolving landscape of cyber threats.
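The kind of pipeline described above (selecting tasks from a pool and converting raw CTI text into question-response records) could be sketched roughly as follows; the task names, templates, and the extract_answer stub are assumptions for illustration, not the authors' released code.

```python
# Rough sketch of turning raw cybersecurity text into supervised
# question/response records. Task names, templates, and the extractor
# stub are hypothetical placeholders.
import json
import random

TASK_POOL = {
    "attacker_identification": "Which threat actor is described in the report?",
    "malware_feature_extraction": "List the malware capabilities mentioned in the report.",
    "attack_tool_identification": "Which attack tools are referenced in the report?",
}

def extract_answer(task: str, raw_text: str) -> str:
    # Placeholder: in practice an LLM or rule-based extractor produces the answer.
    return f"<answer for {task} extracted from source text>"

def build_instruction_records(raw_text: str, n_tasks: int = 2) -> list[dict]:
    records = []
    for task in random.sample(list(TASK_POOL), k=n_tasks):
        records.append({
            "task": task,
            "instruction": TASK_POOL[task],
            "input": raw_text,
            "output": extract_answer(task, raw_text),
        })
    return records

if __name__ == "__main__":
    report = "APT-X delivered a loader via spear-phishing and deployed Cobalt Strike."
    print(json.dumps(build_instruction_records(report), indent=2, ensure_ascii=False))
```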

An adaptive cubic regularization algorithm employing an inexact gradient and Hessian is proposed on general Riemannian manifolds, together with an iteration-complexity bound for reaching approximate second-order optimality under certain assumptions on the accuracies of the inexact gradient and Hessian. The algorithm extends the inexact adaptive cubic regularization algorithm under the true gradient in [Math. Program., 184(1-2): 35-70, 2020] to more general cases, even in Euclidean settings. As an application, the algorithm is applied to solve the joint diagonalization problem on the Stiefel manifold. Numerical experiments illustrate that the algorithm performs better than the inexact trust-region algorithm in [Advances in Neural Information Processing Systems, 31, 2018].
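For readers unfamiliar with cubic regularization, the subproblem solved at each iteration has roughly the following generic form; the notation here is a sketch and is not copied from the cited papers.

```latex
% Generic inexact cubic-regularized subproblem at iterate x_k on a Riemannian
% manifold M: g_k and H_k are the inexact gradient and Hessian, R_{x_k} is a
% retraction, and sigma_k is the adaptive regularization weight.
\[
  \eta_k \;=\; \operatorname*{arg\,min}_{\eta \in T_{x_k} M}\;
  \langle g_k, \eta \rangle
  + \tfrac{1}{2}\langle H_k[\eta], \eta \rangle
  + \tfrac{\sigma_k}{3}\,\|\eta\|^3,
  \qquad
  x_{k+1} \;=\; R_{x_k}(\eta_k).
\]
```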

Diagnosing language disorders associated with autism is a complex and nuanced challenge, often hindered by the subjective nature and variability of traditional assessment methods. Traditional diagnostic methods not only require intensive human effort but also often result in delayed interventions due to their lack of speed and specificity. In this study, we explored the application of ChatGPT, a state-of-the-art large language model, to overcome these obstacles by enhancing diagnostic accuracy and profiling specific linguistic features indicative of autism. Leveraging ChatGPT's advanced natural language processing capabilities, this research aims to streamline and refine the diagnostic process. Specifically, we compared ChatGPT's performance with that of conventional supervised learning models, including BERT, a model acclaimed for its effectiveness in various natural language processing tasks. We showed that ChatGPT substantially outperformed these models, achieving over 13% improvement in both accuracy and F1 score in a zero-shot learning configuration. This marked enhancement highlights the model's potential as a superior tool for neurological diagnostics. Additionally, we identified ten distinct features of autism-associated language disorders that vary significantly across different experimental scenarios. These features, which included echolalia, pronoun reversal, and atypical language usage, were crucial for accurately diagnosing autism spectrum disorder (ASD) and customizing treatment plans. Together, our findings advocate for adopting sophisticated AI tools like ChatGPT in clinical settings to assess and diagnose developmental disorders. Our approach not only promises greater diagnostic precision but also aligns with the goals of personalized medicine, potentially transforming the evaluation landscape for autism and similar neurological conditions.
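A zero-shot configuration of the kind compared against BERT above can be sketched as a single prompt per transcript; the prompt wording, model name, label set, and parsing logic below are illustrative assumptions, not the study's protocol.

```python
# Zero-shot transcript classification sketch. The prompt, model choice, and
# label handling are hypothetical; this is not the study's actual setup.
from openai import OpenAI  # requires the `openai` package and an API key

client = OpenAI()

PROMPT = (
    "Read the child's speech transcript and answer with exactly one label: "
    "'ASD' or 'typical'. Consider features such as echolalia, pronoun reversal, "
    "and atypical language usage.\n\nTranscript:\n{transcript}\n\nLabel:"
)

def classify_transcript(transcript: str, model: str = "gpt-4") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(transcript=transcript)}],
        temperature=0,  # deterministic output for evaluation
    )
    answer = response.choices[0].message.content.strip().lower()
    return "ASD" if answer.startswith("asd") else "typical"
```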

Mission critical communication (MCC) involves the exchange of information and data among emergency services, including the police, fire brigade, and other first responders, particularly during emergencies, disasters, or critical incidents. The widely adopted TETRA (Terrestrial Trunked Radio)-based communication for mission critical services faces challenges including limited data capacity, coverage limitations, spectrum congestion, and security concerns. Therefore, as an alternative, mission critical communication over cellular networks (4G and 5G) has emerged. While cellular-based MCC enables features like real-time video streaming and high-speed data transmission, the involvement of network operators and application service providers in the MCC architecture raises privacy concerns for mission critical users and services. For instance, disclosing a police officer's location details to the network operator is a privacy risk. To the best of our knowledge, no existing work considers the privacy issues in mission critical systems with respect to 5G and upcoming technologies. Therefore, in this paper, we analyse the 3GPP standardised MCC architecture within the context of 5G core network concepts and assess the privacy implications for MC users, network entities, and MC servers. The privacy analysis adheres to the deployment strategies in the standard for MCC. Additionally, we explore emerging 6G technologies, such as off-network communications, joint communication and sensing, and non-3GPP communications, to identify privacy challenges in the MCC architecture. Finally, we propose privacy controls to establish a next-generation privacy-preserving MCC architecture.

We experimentally demonstrate the effects of read disturbance (RowHammer and RowPress) and uncover the inner workings of undocumented read disturbance defense mechanisms in High Bandwidth Memory (HBM). Detailed characterization of six real HBM2 DRAM chips on two different FPGA boards shows that (1) the read disturbance vulnerability significantly varies between different HBM2 chips and between different components (e.g., 3D-stacked channels) inside a chip, (2) DRAM rows at the end and in the middle of a bank are more resilient to read disturbance, (3) fewer additional activations are sufficient to induce more read disturbance bitflips in a DRAM row if the row exhibits the first bitflip at a relatively high activation count, and (4) a modern HBM2 chip implements undocumented read disturbance defenses that track potential aggressor rows based on how many times they are activated. We describe how our findings could be leveraged to develop more powerful read disturbance attacks and more efficient defense mechanisms. We open source all our code and data to facilitate future research at //github.com/CMU-SAFARI/HBM-Read-Disturbance.
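The observation that the chip tracks potential aggressor rows by activation count suggests a counter-based mechanism. The sketch below is a generic activation-counter tracker with arbitrary threshold and table-size values; it is only a conceptual illustration, not the undocumented in-DRAM mechanism characterized in the paper.

```python
# Generic activation-counter sketch of an aggressor-row tracker.
# Threshold, table size, and the neighbor-refresh policy are illustrative
# assumptions, not the reverse-engineered HBM2 defense.
from collections import Counter

class AggressorTracker:
    def __init__(self, activation_threshold: int = 4096, table_size: int = 64):
        self.threshold = activation_threshold
        self.table_size = table_size
        self.counts = Counter()

    def on_activate(self, row: int) -> list[int]:
        """Record one activation; return victim rows to refresh, if any."""
        self.counts[row] += 1
        if len(self.counts) > self.table_size:
            # Evict the least-activated row to keep the tracking table bounded.
            coldest, _ = min(self.counts.items(), key=lambda kv: kv[1])
            del self.counts[coldest]
        if self.counts[row] >= self.threshold:
            self.counts[row] = 0          # reset the counter after mitigation
            return [row - 1, row + 1]     # physically adjacent victim rows
        return []
```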

Semantic communication, emerging as a breakthrough beyond the classical Shannon paradigm, aims to convey the essential meaning of source data rather than merely focusing on precise yet content-agnostic bit transmission. By interconnecting diverse intelligent agents (e.g., autonomous vehicles and VR devices) via semantic communications, semantic communication networks (SemComNet) support semantic-oriented transmission, efficient spectrum utilization, and flexible networking among collaborative agents. Consequently, SemComNet stands out for enabling ever-increasing intelligent applications, such as autonomous driving and the Metaverse. However, being built on a variety of cutting-edge technologies including AI and knowledge graphs, SemComNet introduces a range of new and unexpected threats, which pose obstacles to its widespread development. Besides, due to the intrinsic characteristics of SemComNet in terms of heterogeneous components, autonomous intelligence, and large-scale structure, a series of critical challenges emerge in securing SemComNet. In this paper, we provide a comprehensive and up-to-date survey of SemComNet from the fundamentals, security, and privacy perspectives. Specifically, we first introduce a novel three-layer architecture of SemComNet for multi-agent interaction, which comprises the control layer, semantic transmission layer, and cognitive sensing layer. Then, we discuss its working modes and enabling technologies. Afterward, based on the layered architecture of SemComNet, we outline a taxonomy of security and privacy threats while discussing state-of-the-art defense approaches. Finally, we present future research directions, clarifying the path toward building intelligent, robust, and green SemComNet. To our knowledge, this survey is the first to comprehensively cover the fundamentals of SemComNet, alongside a detailed analysis of its security and privacy issues.

Retrieval-augmented Generation (RAG) systems have been actively studied and deployed across various industries to query domain-specific knowledge bases. However, evaluating these systems presents unique challenges due to the scarcity of domain-specific queries and corresponding ground truths, as well as a lack of systematic approaches to diagnosing the cause of failure cases -- whether they stem from knowledge deficits or issues related to system robustness. To address these challenges, we introduce GRAMMAR (GRounded And Modular Methodology for Assessment of RAG), an evaluation framework comprising two key elements: 1) a data generation process that leverages relational databases and LLMs to efficiently produce scalable query-answer pairs. This method facilitates the separation of query logic from linguistic variations for enhanced debugging capabilities; and 2) an evaluation framework that differentiates knowledge gaps from robustness issues and enables the identification of defective modules. Our empirical results underscore the limitations of current reference-free evaluation approaches and the reliability of GRAMMAR in accurately identifying model vulnerabilities.
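The data-generation idea (relational databases providing ground-truth answers, with query logic kept separate from linguistic variation) could be sketched as below; the schema, SQL template, and phrasings are hypothetical examples, not the framework's actual implementation.

```python
# Sketch of generating query-answer pairs from a relational database while
# keeping the query logic (SQL template) separate from linguistic variants.
# Schema, data, and phrasings are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, category TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", [
    ("Widget A", "tools", 9.99),
    ("Widget B", "tools", 14.50),
    ("Gadget C", "electronics", 99.00),
])

SQL_TEMPLATE = "SELECT name FROM products WHERE category = ?"
PHRASINGS = [  # linguistic variants of the same underlying query logic
    "Which products belong to the {category} category?",
    "List every product filed under {category}.",
]

def generate_pairs(category: str) -> list[dict]:
    answer = [row[0] for row in conn.execute(SQL_TEMPLATE, (category,))]
    return [{"question": p.format(category=category), "answer": answer}
            for p in PHRASINGS]

print(generate_pairs("tools"))
```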

This article presents the affordances that Generative Artificial Intelligence can have in the context of disinformation, one of the major threats to our digitalized society. We present a research framework to generate customized agent-based social networks for disinformation simulations that would enable understanding and evaluation of the phenomenon, and we discuss open challenges.
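As a toy illustration of agent-based disinformation spread on a synthetic network (an independent-cascade-style process, not the framework proposed in the article), consider the following; the network size, edge probability, and sharing probability are arbitrary values chosen for the example.

```python
# Toy independent-cascade spread of a false claim over a synthetic network.
# All parameters are illustrative assumptions, not the article's framework.
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=0)

believers = {0, 1}                  # agents seeded with the false claim
frontier = set(believers)
while frontier:
    next_frontier = set()
    for agent in frontier:
        for neighbor in G.neighbors(agent):
            # Each exposed neighbor adopts the claim with probability 0.1.
            if neighbor not in believers and random.random() < 0.1:
                believers.add(neighbor)
                next_frontier.add(neighbor)
    frontier = next_frontier

print(f"{len(believers)} of {G.number_of_nodes()} agents exposed to the claim")
```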

Temporal characteristics are prominently evident in a substantial volume of knowledge, which underscores the pivotal role of Temporal Knowledge Graphs (TKGs) in both academia and industry. However, TKGs often suffer from incompleteness for three main reasons: the continuous emergence of new knowledge, the weakness of algorithms for extracting structured information from unstructured data, and the lack of information in the source dataset. Thus, the task of Temporal Knowledge Graph Completion (TKGC) has attracted increasing attention, aiming to predict missing items based on the available information. In this paper, we provide a comprehensive review of TKGC methods and their details. Specifically, this paper mainly consists of three components, namely, 1) Background, which covers the preliminaries of TKGC methods, loss functions required for training, as well as the datasets and evaluation protocols; 2) Interpolation, which estimates and predicts the missing elements or sets of elements from the relevant available information, and categorizes related TKGC methods based on how they process temporal information; 3) Extrapolation, which typically focuses on continuous TKGs and predicts future events, and classifies all extrapolation methods based on the algorithms they utilize. We further pinpoint the challenges and discuss future research directions of TKGC.
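For concreteness, the completion task surveyed above is commonly posed over time-stamped quadruples; the notation below is a generic sketch and is not tied to any single method covered in the review.

```latex
% A TKG is a set of quadruples (s, r, o, \tau): subject, relation, object, timestamp.
% Interpolation fills a missing element within the observed time range, e.g.
% answering (s, r, ?, \tau); extrapolation scores candidates at future timestamps.
\[
  \mathcal{G} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E} \times \mathcal{T},
  \qquad
  \hat{o} \;=\; \operatorname*{arg\,max}_{o \in \mathcal{E}} \; f_\theta(s, r, o, \tau).
\]
```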

A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets and clear articulations of the remaining challenges. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding. In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the computational linguistics community.
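As a one-line anchor for the estimation problem described above, the average treatment effect under adjustment for (possibly text-derived) confounders can be written as follows; the notation is generic and not specific to the survey.

```latex
% Average treatment effect of a binary treatment T on outcome Y, adjusting for
% confounders X (which may be encoded from text); generic notation.
\[
  \mathrm{ATE} \;=\; \mathbb{E}\!\left[\,\mathbb{E}[Y \mid T{=}1, X] \;-\; \mathbb{E}[Y \mid T{=}0, X]\,\right].
\]
```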
