
Participatory design initiatives, especially within the realm of digital civics, are often integrated and co-developed with the very citizens and communities they intend to assist. Digital civics research aims to create positive social change using a variety of digital technologies. These research projects commonly adopt various embedded processes, such as commissioning models \cite{dcitizensproj22}. Despite the adoption of this process within a range of domains, there is currently no framework of best practices and accountability procedures to ensure that we engage with citizens ethically and that our projects remain sustainable. This workshop aims to provide a space to start collaboratively constructing a dynamic framework of best practices, laying the groundwork for the future of sustainable embedded research processes. The overarching goal is to foster discussions and share insights that contribute to developing effective practices, ensuring the longevity and impact of participatory digital civics projects.


In recent years, large language models (LLMs) have garnered significant attention due to their superior performance in complex reasoning tasks. However, recent studies show that their reasoning capabilities diminish markedly when problem descriptions contain irrelevant information, even with the use of advanced prompting techniques. To investigate this issue further, a dataset of primary school mathematics problems containing irrelevant information, named GSMIR, was constructed. Testing prominent LLMs and prompting techniques on this dataset revealed that while LLMs can identify irrelevant information, they do not effectively mitigate the interference it causes once it has been identified. To address this shortcoming, a novel automatic construction method, ATF, is proposed, which enhances the ability of LLMs to identify and self-mitigate the influence of irrelevant information. The method operates in two steps: analysis of the irrelevant information, followed by its filtering. Experimental results demonstrate that ATF significantly improves the reasoning performance of LLMs and prompting techniques on the GSMIR dataset, even in the presence of irrelevant information.
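
The two-step idea described above can be sketched as a simple prompting pipeline. The sketch below is illustrative only: `complete` stands in for any LLM completion API, and the prompt wording and helper names are assumptions, not the paper's exact ATF prompts.

```python
# Minimal sketch of an "analyze, then filter" prompting pipeline.
# `complete(prompt) -> str` is a placeholder for any LLM API call.

def complete(prompt: str) -> str:
    """Placeholder for a chat/completion API call."""
    raise NotImplementedError

def analyze_then_filter(problem: str) -> str:
    # Step 1: ask the model to analyze which sentences are irrelevant.
    analysis = complete(
        "Identify any sentences in the following math problem that are "
        "irrelevant to solving it, and explain why:\n" + problem
    )
    # Step 2: ask the model to restate the problem without the irrelevant
    # information, then solve the filtered version step by step.
    filtered = complete(
        "Rewrite the problem below, removing the irrelevant sentences "
        "identified in the analysis.\n\nProblem:\n" + problem +
        "\n\nAnalysis:\n" + analysis
    )
    return complete("Solve the following problem step by step:\n" + filtered)
```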

In the era of advanced artificial intelligence, highlighted by large-scale generative models like GPT-4, ensuring the traceability, verifiability, and reproducibility of datasets throughout their lifecycle is paramount for research institutions and technology companies. These organisations increasingly rely on vast corpora to train and fine-tune advanced AI models, resulting in intricate data supply chains that demand effective data governance mechanisms. In addition, the challenge intensifies as diverse stakeholders may use assorted tools, often without adequate measures to ensure the accountability of data and the reliability of outcomes. In this study, we adapt the concept of the ``Software Bill of Materials'' to the field of data governance and management to address these challenges, and introduce the ``Data Bill of Materials'' (DataBOM) to capture the dependency relationships between different datasets and stakeholders by storing specific metadata. We demonstrate a platform architecture for providing blockchain-based DataBOM services, present the interaction protocol for stakeholders, and discuss the minimal requirements for DataBOM metadata. The proposed solution is evaluated in terms of feasibility and performance via a case study and quantitative analysis, respectively.
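
To make the idea concrete, the sketch below shows what a minimal DataBOM-style metadata record might look like, linking a derived dataset to its parent datasets and a responsible stakeholder. The field names and helper are assumptions for illustration, not the paper's prescribed schema or its blockchain interface.

```python
# Illustrative DataBOM-style metadata entry; field names are assumptions.
from dataclasses import dataclass, field
from hashlib import sha256

@dataclass
class DataBOMEntry:
    dataset_id: str
    version: str
    steward: str                                        # accountable stakeholder
    derived_from: list = field(default_factory=list)    # parent dataset_ids
    processing_step: str = ""                           # e.g. "deduplication"
    content_hash: str = ""                              # digest anchoring the entry

def make_entry(dataset_id, version, steward, raw_bytes, parents, step):
    return DataBOMEntry(
        dataset_id=dataset_id,
        version=version,
        steward=steward,
        derived_from=list(parents),
        processing_step=step,
        content_hash=sha256(raw_bytes).hexdigest(),
    )

# Example: a cleaned corpus derived from two raw crawls.
entry = make_entry("corpus-clean", "1.0", "data-team@example.org",
                   b"...corpus bytes...", ["crawl-a", "crawl-b"], "deduplication")
```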

The integration of artificial intelligence (AI) across contemporary industries is not just a technological upgrade but a transformation with profound structural implications. This paper explores the concept of structural risks associated with the rapid integration of advanced AI systems across social, economic, and political systems. This framework challenges the conventional perspectives that primarily focus on direct AI threats such as accidents and misuse and suggests that these more proximate risks are interconnected and influenced by a larger sociotechnical system. By analyzing the interactions between technological advancements and social dynamics, this study isolates three primary categories of structural risk: antecedent structural causes, antecedent system causes, and deleterious feedback loops. We present a comprehensive framework to understand the causal chains that drive these risks, highlighting the interdependence between structural forces and the more proximate risks of misuse and system failures. The paper articulates how unchecked AI advancement can reshape power dynamics, trust, and incentive structures, leading to profound and often unpredictable shifts. We introduce a methodological research agenda for mapping, simulating, and gaming these dynamics aimed at preparing policymakers and national security officials for the challenges posed by next-generation AI technologies. The paper concludes with policy recommendations.

We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding it. For example, in games, players who are engaged in a battle have to watch their enemies while also reading the remaining health of their own character from the health bar over that character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different designs of embedded visualizations in motion impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly, derived from what we learned about video games. All supplemental materials of this paper are available at https://osf.io/3v8wm/.
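
The health-bar example above can be illustrated with a toy rendering loop in which the visualization moves with the object it annotates. This is only a sketch of the concept using pygame; the layout, colors, and values are illustrative and unrelated to the study's stimuli.

```python
# Toy sketch of an embedded visualization in motion: a health bar moving with
# its game character. Uses pygame; all values are illustrative.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
x, health = 50, 0.8                         # character position and health fraction

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    x = (x + 2) % 640                        # the character (and its bar) move together
    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (200, 200, 0), (x, 240, 32, 32))             # character
    pygame.draw.rect(screen, (80, 0, 0), (x, 228, 32, 6))                 # bar background
    pygame.draw.rect(screen, (0, 200, 0), (x, 228, int(32 * health), 6))  # remaining health
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```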

Experimentation is an essential method for causal inference in any empirical discipline. Crossover-design experiments are common in Software Engineering (SE) research. In these, subjects apply more than one treatment in different orders. This design increases the amount of obtained data and deals with subject variability, but it introduces threats to internal validity such as the learning and carryover effects. Vegas et al. reviewed the state of practice for crossover designs in SE research and provided guidelines on how to address these threats during data analysis while still harnessing the design's benefits. In this paper, we reflect on the impact of these guidelines and review the state of analysis of crossover-design experiments in SE publications between 2015 and March 2024. To this end, by conducting a forward snowballing of the guidelines, we survey 136 publications reporting 67 crossover-design experiments and evaluate their data analysis against the provided guidelines. The results show that the validity of data analyses has improved compared to the original state of analysis. Still, despite the explicit guidelines, only 29.5% of all threats to validity were addressed properly. While the maturation and the optimal-sequence threats are properly addressed in 35.8% and 38.8% of all studies in our sample, respectively, the carryover threat is modeled in only about 3% of the observed cases. The lack of adherence to the analysis guidelines threatens the validity of the conclusions drawn from crossover-design experiments.
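
One common way to account for the threats named above is a linear mixed model that includes treatment, period, and sequence as fixed effects and the subject as a random effect. The sketch below shows this pattern with statsmodels; the column names and file are hypothetical, and this is one standard modeling choice rather than a reproduction of the cited guidelines.

```python
# Minimal sketch of a mixed-model analysis for an AB/BA crossover experiment.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("crossover_results.csv")  # columns: subject, sequence, period, treatment, score

model = smf.mixedlm(
    "score ~ C(treatment) + C(period) + C(sequence)",  # fixed effects
    data=df,
    groups=df["subject"],                               # random intercept per subject
)
result = model.fit()
print(result.summary())
```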

AI-generated synthetic media, also called Deepfakes, have significantly influenced many domains, from entertainment to cybersecurity. Generative Adversarial Networks (GANs) and Diffusion Models (DMs) are the main frameworks used to create Deepfakes, producing highly realistic yet fabricated content. While these technologies open up new creative possibilities, they also bring substantial ethical and security risks due to their potential misuse. The rise of such advanced media has led to the development of a cognitive bias known as Impostor Bias, in which individuals doubt the authenticity of multimedia because they are aware of AI's capabilities. As a result, Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques, especially Convolutional Neural Networks (CNNs). Research in forensic Deepfake technology encompasses five main areas: detection, attribution and recognition, passive authentication, detection in realistic scenarios, and active authentication. This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
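
As a point of reference for the CNN-based detection mentioned above, the sketch below shows a minimal binary real/fake image classifier in PyTorch. The architecture is illustrative only and not tied to any specific detector from the surveyed literature.

```python
# Minimal CNN sketch for binary real/fake classification of image frames.
import torch
import torch.nn as nn

class SmallDeepfakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # single logit: fake vs. real

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = SmallDeepfakeCNN()
logits = model(torch.randn(4, 3, 224, 224))  # batch of 4 RGB frames
probs = torch.sigmoid(logits)                # probability of "fake"
```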

Recent and unremitting capability advances have been accompanied by calls for comprehensive, rather than patchwork, regulation of frontier artificial intelligence (AI). Approval regulation is emerging as a promising candidate. An approval regulation scheme is one in which a firm cannot legally market, or in some cases develop, a product without explicit approval from a regulator on the basis of experiments performed upon the product that demonstrate its safety. This approach is used successfully by the FDA and FAA. Further, its application to frontier AI has been publicly supported by many prominent stakeholders. This report proposes an approval regulation schematic for only the largest AI projects in which scrutiny begins before training and continues through to post-deployment monitoring. The centerpieces of the schematic are two major approval gates, the first requiring approval for large-scale training and the second for deployment. Five main challenges make implementation difficult: noncompliance through unsanctioned deployment, specification of deployment readiness requirements, reliable model experimentation, filtering out safe models before the process, and minimizing regulatory overhead. This report makes a number of crucial recommendations to increase the feasibility of approval regulation, some of which must be followed urgently if such a regime is to succeed in the near future. Further recommendations, produced by this report's analysis, may improve the effectiveness of any regulatory regime for frontier AI.

OpenFlow switches are fundamental components of software-defined networking, where the key operation is to look up flow tables to determine which flow an incoming packet belongs to. This requires solving the same multi-field rule-matching problem as legacy packet classification, but faces more serious scalability challenges. The demand for fast on-line updates makes most existing solutions unfit, while the rest still lack scalability to either large data sets or large numbers of match fields per rule. In this work, we propose TupleChain for fast OpenFlow table lookup with multifaceted scalability. We group rules based on their masks, maintain each group with a hash table, and exploit the connections among rule groups to skip unnecessary hash probes for fast search. We show via theoretical analysis and extensive experiments that the proposed scheme not only has competitive computing complexity, but is also scalable and can achieve high performance in both search and update. It can process multiple millions of packets per second while handling millions of on-line updates per second at the same time, and its lookup speed remains at the same level no matter whether it handles a large flow table with 10 million rules or a flow table in which every entry has as many as 100 match fields.
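
For intuition about the mask-grouped hash lookup described above, the sketch below shows the baseline tuple-space-search idea: rules are grouped by their bit mask and each group is backed by a hash table keyed on the masked header. TupleChain's probe-skipping links between groups are only hinted at in a comment; this is an illustration, not the paper's data structure.

```python
# Sketch of mask-grouped hash lookup in the spirit of tuple space search.

class MaskGroup:
    def __init__(self, mask: int):
        self.mask = mask
        self.table = {}  # masked key -> (priority, action)

    def insert(self, key: int, priority: int, action):
        self.table[key & self.mask] = (priority, action)

    def lookup(self, header: int):
        return self.table.get(header & self.mask)

class FlowTable:
    def __init__(self):
        self.groups = {}  # mask -> MaskGroup

    def insert(self, mask: int, key: int, priority: int, action):
        self.groups.setdefault(mask, MaskGroup(mask)).insert(key, priority, action)

    def lookup(self, header: int):
        # This naive variant probes every group; TupleChain instead follows
        # connections between related groups to skip probes that cannot match.
        best = None
        for group in self.groups.values():
            hit = group.lookup(header)
            if hit and (best is None or hit[0] > best[0]):
                best = hit
        return best
```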

Analyzing large sets of visual media remains a challenging task, particularly in mixed-method studies dealing with problematic information and human subjects. Using AI tools in such analyses risks reifying and exacerbating biases, and runs into untenable computational and cost limitations. We therefore turn to geometric computer graphics and vision methods for analyzing a large set of images from a problematic information campaign, in conjunction with human-in-the-loop qualitative analysis. We illustrate an effective case of this approach with the implementation of color quantization to analyze online hate images at the US-Mexico border, along with a historicist trace of the history of color quantization and skin tone scales, to inform our usage and reclamation of these methodologies from their racist origins. To that end, we scaffold motivations and the need for more researchers to consider the advantages and risks of reclaiming such methodologies in their own work, situated in our case study.
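
Color quantization of the kind mentioned above can be done with a simple k-means pass over an image's pixels, yielding a small palette of dominant colors for human-in-the-loop review. The sketch below uses Pillow and scikit-learn; the file path and cluster count are illustrative, and this is a generic implementation rather than the authors' pipeline.

```python
# Minimal color quantization via k-means: summarize an image as 8 dominant colors.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = Image.open("example.jpg").convert("RGB")
pixels = np.asarray(img).reshape(-1, 3)

kmeans = KMeans(n_clusters=8, n_init="auto", random_state=0).fit(pixels)
palette = kmeans.cluster_centers_.astype(np.uint8)   # 8 dominant colors
counts = np.bincount(kmeans.labels_, minlength=8)    # pixels assigned to each color

for color, count in sorted(zip(palette.tolist(), counts), key=lambda t: -t[1]):
    print(color, count)
```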

The advent of large language models marks a revolutionary breakthrough in artificial intelligence. With the unprecedented scale of training and model parameters, the capability of large language models has improved dramatically, leading to human-like performance in understanding, language synthesis, and common-sense reasoning. Such a major leap forward in general AI capacity will change how personalization is conducted. For one thing, it will reform the way humans interact with personalization systems. Instead of being a passive medium of information filtering, large language models present the foundation for active user engagement. On top of this new foundation, user requests can be proactively explored, and the information users need can be delivered in a natural and explainable way. For another, it will considerably expand the scope of personalization, growing it from the sole function of collecting personalized information to the compound function of providing personalized services. By leveraging large language models as a general-purpose interface, personalization systems may compile user requests into plans, call the functions of external tools to execute those plans, and integrate the tools' outputs to complete end-to-end personalization tasks. Today, large language models are still being developed, whereas their application to personalization remains largely unexplored. We therefore consider it the right time to review the challenges in personalization and the opportunities to address them with LLMs. In particular, we dedicate this perspective paper to the following aspects: the development of and challenges for existing personalization systems, the newly emerged capabilities of large language models, and the potential ways of making use of large language models for personalization.
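
The plan-then-execute pattern described above can be sketched as a short loop: an LLM compiles the user request into tool calls, the tools run, and the results are folded into a personalized response. Everything here is a placeholder: `complete` stands in for any LLM API, and the tool registry names are hypothetical rather than a specific system's interface.

```python
# Illustrative plan-then-execute loop for LLM-driven personalization.
import json

def complete(prompt: str) -> str:
    """Placeholder for any LLM completion call."""
    raise NotImplementedError

TOOLS = {
    "search_catalog": lambda query: [],       # stand-in: retrieve candidate items
    "get_user_profile": lambda user_id: {},   # stand-in: fetch stored preferences
}

def personalize(user_id: str, request: str) -> str:
    # Compile the request into a plan: a JSON list of {"name": ..., "args": ...}.
    plan = json.loads(complete(
        "Return a JSON list of tool calls (name, args) that would answer the "
        f"request for user {user_id}: {request}"
    ))
    # Execute the plan by calling the registered tools.
    results = [TOOLS[call["name"]](**call["args"]) for call in plan]
    # Integrate the tool outputs into a personalized answer.
    return complete(
        "Using these tool results, write a personalized answer to the request.\n"
        f"Request: {request}\nResults: {results}"
    )
```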
