
As artificial intelligence continues its unprecedented global expansion, bringing a proliferation of benefits, apprehension about the privacy and security implications of AI-enabled systems is growing. How to effectively govern AI development at both the jurisdictional and organizational levels has become a prominent theme in contemporary discourse. While the European Parliament and Council have taken a decisive step by reaching a political agreement on the EU AI Act, the first comprehensive AI law, organizations still find it challenging to adapt to the fast-evolving AI landscape and lack a universal tool for evaluating the privacy and security dimensions of their AI models and systems. In response to this challenge, this study conducts a systematic literature review (SLR) spanning the years 2020 to 2023, with a primary focus on establishing a unified definition of key concepts in AI ethics, particularly in the domains of privacy and security. Synthesizing the knowledge extracted from the SLR, the study presents a conceptual framework for privacy- and security-aware AI systems. The framework is designed to assist diverse stakeholders, including organizations, academic institutions, and governmental bodies, in both developing and critically assessing AI systems. Essentially, it serves as a guide for ethical decision-making, fostering an environment in which AI is developed and used with a strong commitment to ethical principles. In addition, the study identifies the key issues and challenges surrounding the privacy and security dimensions and delineates promising avenues for future research, contributing to the ongoing dialogue on the globalization and democratization of AI ethics.

Related content

The journal Artificial Intelligence (AI) is widely recognized as the premier international forum for publishing the latest research results in the field. The journal welcomes papers on broad aspects of AI that constitute advances for the field as a whole, as well as papers describing AI applications; for the latter, the emphasis should be on how new and novel AI methods improve performance in the application domain, rather than on presenting yet another application of conventional AI methods. Application papers should describe a principled solution, emphasize its novelty, and provide an in-depth evaluation of the AI techniques being developed. Official website:

In many real-world large-scale decision problems, self-interested agents have individual dynamics and optimize their own long-term payoffs. Important examples include competitive access to shared resources (e.g., roads, energy, or bandwidth) but also non-engineering domains like epidemic propagation and control. These problems are natural to model as mean-field games. However, existing mathematical formulations of mean-field games have had limited applicability in practice, since they require solving non-standard initial-terminal-value problems that are tractable only in limited special cases. In this letter, we propose a novel formulation, along with computational tools, for a practically relevant class of Dynamic Population Games (DPGs), which correspond to discrete-time, finite-state-and-action, stationary mean-field games. Our main contribution is a mathematical reduction of Stationary Nash Equilibria (SNE) in DPGs to standard Nash Equilibria (NE) in static population games. This reduction is leveraged to guarantee the existence of an SNE, develop an evolutionary dynamics-based SNE computation algorithm, and derive simple conditions that guarantee stability and uniqueness of the SNE. Additionally, DPGs enable us to tractably incorporate multiple agent types, which is of particular importance for assessing fairness concerns in resource allocation problems. We demonstrate our results by computing the SNE in two complex application examples: fair resource allocation with heterogeneous agents and control of epidemic propagation. Open-source software for SNE computation: https://gitlab.ethz.ch/elokdae/dynamic-population-games
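To make the static-population-game target of the reduction concrete, here is a minimal sketch of evolutionary-dynamics equilibrium computation in a static resource-access game, using replicator dynamics on a toy congestion model. The latency coefficients are invented for illustration and the update rule is the textbook replicator step, not the authors' released algorithm.

```python
import numpy as np

# Toy congestion game: three shared routes; route i has latency
# b[i] + a[i] * x[i] when a fraction x[i] of the population uses it,
# and each agent's payoff is the negative latency of its route.
a = np.array([1.0, 2.0, 4.0])   # hypothetical congestion slopes
b = np.array([0.5, 0.4, 0.2])   # hypothetical free-flow latencies

x = np.ones(3) / 3              # population state (a point on the simplex)
dt = 0.01
for _ in range(50_000):
    f = -(b + a * x)            # per-route payoffs at state x
    avg = x @ f                 # population-average payoff
    x += dt * x * (f - avg)     # replicator dynamics step

print("approximate equilibrium state:", x.round(4))   # -> ~[0.5, 0.3, 0.2]
print("route latencies:", (b + a * x).round(4))       # equalized at equilibrium
```

At the equilibrium the latencies of all used routes equalize, which is exactly the indifference condition characterizing an NE of the static game; the paper's contribution is showing that SNE of the dynamic game can be found by solving problems of this standard form.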

We present a method of parameter estimation for a large class of nonlinear systems, namely those in which the state consists of output derivatives and the flow is linear in the parameter. The method, which solves for the unknown parameter by directly inverting the dynamics using regularized linear regression, is based on new design and analysis ideas for differentiation filtering and regularized least squares. Combined in series, these yield a novel finite-sample bound on the mean absolute error of the estimate.
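The two-step structure (differentiation filtering, then regularized least squares) can be illustrated on a simple system of the stated class. The example system, the Savitzky-Golay filter choice, and all numbers below are our assumptions for illustration; the paper's own filter design and analysis differ.

```python
import numpy as np
from scipy.signal import savgol_filter

# Example system (ours, not the paper's): y'' = theta1*y + theta2*y'.
# The state is (y, y') and the flow is linear in theta = (theta1, theta2).
rng = np.random.default_rng(0)
theta_true = np.array([-4.0, -0.5])   # hypothetical ground truth
dt, T = 1e-3, 10.0
t = np.arange(0, T, dt)

# Simulate by forward Euler, then corrupt the output with noise.
y, yd = np.zeros_like(t), np.zeros_like(t)
y[0] = 1.0
for k in range(len(t) - 1):
    ydd = theta_true @ np.array([y[k], yd[k]])
    y[k + 1] = y[k] + dt * yd[k]
    yd[k + 1] = yd[k] + dt * ydd
y_meas = y + 0.01 * rng.standard_normal(len(t))

# Step 1: differentiation filtering (Savitzky-Golay as a stand-in filter)
win, order = 101, 3
y0 = savgol_filter(y_meas, win, order, deriv=0)
y1 = savgol_filter(y_meas, win, order, deriv=1, delta=dt)
y2 = savgol_filter(y_meas, win, order, deriv=2, delta=dt)

# Step 2: ridge regression inverting the dynamics, y'' ~ [y, y'] @ theta
Phi, lam = np.column_stack([y0, y1]), 1e-3
theta_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2), Phi.T @ y2)
print("estimated theta:", theta_hat)   # close to theta_true
```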

Amid the robust impetus from artificial intelligence (AI) and big data, edge intelligence (EI) has emerged as a nascent computing paradigm, synthesizing AI with edge computing (EC) to become an exemplary solution for unleashing the full potential of AI services. Nonetheless, challenges in communication costs, resource allocation, privacy, and security continue to constrain its ability to support services with diverse requirements. In response, this paper introduces socialized learning (SL) as a promising solution that further propels the advancement of EI. SL is a learning paradigm predicated on social principles and behaviors, aimed at amplifying the collaborative capacity and collective intelligence of agents within the EI system. SL not only enhances the system's adaptability but also optimizes communication and networking processes, which are essential for distributed intelligence across diverse devices and platforms. A combination of SL and EI may therefore greatly facilitate the development of collaborative intelligence in future networks. This paper presents the findings of a literature review on the integration of EI and SL, summarizing the latest achievements of existing research on both. We then delve into the limitations of EI and how it could benefit from SL, with special emphasis on the communication challenges and networking strategies within these systems, underlining the role of optimized network solutions in improving system efficacy. Based on these discussions, we elaborate in detail on three integrated components: socialized architecture, socialized training, and socialized inference, analyzing their strengths and weaknesses. Finally, we identify possible future applications of combining SL and EI, discuss open problems, and suggest directions for future research.
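One way to picture the "socialized training" component is edge agents periodically mixing model parameters with socially connected neighbors, weighted by tie strength. The sketch below is a toy gossip-averaging illustration under that assumption; the trust matrix, update rule, and gradient stand-in are all invented, not the paper's specification.

```python
import numpy as np

n_agents, dim = 4, 8
rng = np.random.default_rng(1)
params = rng.standard_normal((n_agents, dim))   # each agent's local model

# Row-stochastic social-trust matrix: self-weight plus neighbor weights.
W = np.array([[0.6, 0.2, 0.2, 0.0],
              [0.2, 0.6, 0.0, 0.2],
              [0.2, 0.0, 0.6, 0.2],
              [0.0, 0.2, 0.2, 0.6]])

for _ in range(50):
    grads = 0.1 * params                  # stand-in for local gradient steps
    params = W @ (params - 0.01 * grads)  # local update, then social mixing

# Disagreement across agents shrinks as the social mixing spreads knowledge.
print("max parameter disagreement:", np.ptp(params, axis=0).max())
```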

Counterfactual reasoning, a crucial manifestation of human intelligence, refers to making presuppositions that depart from established facts and extrapolating the potential outcomes. Existing multimodal large language models (MLLMs) have exhibited impressive cognitive and reasoning capabilities, which have been examined across a wide range of Visual Question Answering (VQA) benchmarks. Nevertheless, how do existing MLLMs perform when faced with counterfactual questions? To answer this question, we first curate a novel \textbf{C}ounter\textbf{F}actual \textbf{M}ulti\textbf{M}odal reasoning benchmark, abbreviated as \textbf{CFMM}, to systematically assess the counterfactual reasoning capabilities of MLLMs. CFMM comprises six challenging tasks, each containing hundreds of carefully human-labeled counterfactual questions, to evaluate MLLMs' counterfactual reasoning capabilities across diverse aspects. Interestingly, our experiments show that existing MLLMs prefer to believe what they see and ignore the counterfactual presuppositions presented in the question, leading to inaccurate responses. We further evaluate a wide range of prevalent MLLMs on CFMM. The significant gap between their performance on CFMM and on several VQA benchmarks indicates that there is still considerable room for improvement before existing MLLMs approach human-level intelligence; conversely, boosting MLLM performance on CFMM may open avenues toward developing MLLMs with more advanced intelligence.
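A minimal sketch of the kind of probe such a benchmark enables: ask a model a factual and a counterfactual variant of the same visual question and flag cases where the presupposition is ignored. The `query_mllm` stub, the example item, and the "same answer" heuristic are illustrative assumptions, not the CFMM protocol or data.

```python
def query_mllm(image_path: str, question: str) -> str:
    # Placeholder: replace with a real MLLM client call.
    return "3"

examples = [  # illustrative item, not from CFMM
    {"image": "kitchen.jpg",
     "factual": "How many apples are on the table?",
     "counterfactual": "If two apples were removed, how many apples "
                       "would be on the table?"},
]

for ex in examples:
    a_fact = query_mllm(ex["image"], ex["factual"])
    a_cf = query_mllm(ex["image"], ex["counterfactual"])
    # A model that "believes what it sees" tends to repeat the factual
    # answer instead of applying the counterfactual presupposition.
    flag = "presupposition ignored?" if a_fact == a_cf else "ok"
    print(ex["image"], a_fact, a_cf, flag)
```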

We introduce CHARM, the first benchmark for comprehensive and in-depth evaluation of the commonsense reasoning ability of large language models (LLMs) in Chinese, covering both globally known and Chinese-specific commonsense. We evaluated 7 English and 12 Chinese-oriented LLMs on CHARM, employing 5 representative prompt strategies for improving LLMs' reasoning ability, such as Chain-of-Thought. Our findings indicate that an LLM's language orientation and the task's domain influence the effectiveness of the prompt strategy, which enriches previous research findings. We built closely interconnected reasoning and memorization tasks and found that some LLMs struggle with memorizing Chinese commonsense, which affects their reasoning ability, while others show differences in reasoning despite similar memorization performance. We also evaluated the LLMs' memorization-independent reasoning abilities and analyzed the typical errors. Our study precisely identifies the LLMs' strengths and weaknesses, providing clear directions for optimization, and can also serve as a reference for studies in other fields. We will release CHARM at https://github.com/opendatalab/CHARM .
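For readers unfamiliar with what a "prompt strategy" concretely changes, here is a sketch of two of the strategy types being compared, written as plain templates. The wording and the sample question are ours, not the CHARM release.

```python
def direct_prompt(question: str) -> str:
    # Baseline: ask for the answer with no elicited reasoning.
    return f"Question: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    # Zero-shot CoT: elicit intermediate reasoning before the final answer.
    return f"Question: {question}\nLet's think step by step."

# Illustrative Chinese-specific commonsense question (not from CHARM):
q = "春节前的大扫除通常在农历哪个月进行?"
print(direct_prompt(q))
print(chain_of_thought_prompt(q))
```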

The escalating challenge of misinformation, particularly in political discourse, necessitates advanced solutions for fact-checking. We introduce approaches that enhance the reliability and efficiency of multimodal fact-checking by integrating Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG)-based advanced reasoning techniques. This work proposes two novel methodologies, Chain of RAG (CoRAG) and Tree of RAG (ToRAG), which handle multimodal claims by reasoning about the next questions to answer based on previously retrieved evidence. Our approaches improve the accuracy of veracity predictions and the quality of generated explanations over the traditional fact-checking approach of sub-question generation with chain-of-thought veracity prediction. By employing multimodal LLMs adept at analyzing both text and images, this research advances the capability of automated systems to identify and counter misinformation.
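A minimal sketch of the chained question-answering loop described above, with `llm` and `retrieve` as assumed stand-ins for a real model client and evidence retriever; the prompt wording and stopping convention are ours, not the authors' implementation.

```python
def llm(prompt: str) -> str:
    return "DONE"   # placeholder: replace with a real (multimodal) LLM call

def retrieve(question: str) -> str:
    return "..."    # placeholder: replace with web/image evidence retrieval

def chain_of_rag(claim: str, max_steps: int = 5) -> str:
    """Iteratively ask the next question, retrieve evidence, then decide."""
    evidence = []
    for _ in range(max_steps):
        q = llm(f"Claim: {claim}\nEvidence so far: {evidence}\n"
                "What question should be answered next? Reply DONE if "
                "the evidence suffices.")
        if q.strip() == "DONE":
            break
        evidence.append((q, retrieve(q)))
    return llm(f"Claim: {claim}\nEvidence: {evidence}\n"
               "Verdict (true / false / unverifiable) with explanation:")

print(chain_of_rag("The photo shows the candidate at the 2020 rally."))
```

ToRAG, as named above, can be read as branching this loop over several candidate questions per step and keeping the best-supported branch, though the selection criterion is the paper's, not shown here.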

The prevalence of digital media and evolving sociopolitical dynamics have significantly amplified the dissemination of hateful content. Existing studies mainly classify texts into binary categories, often overlooking the continuous spectrum of offensiveness and hatefulness inherent in text. In this research, we present an extensive benchmark dataset for Amharic, comprising 8,258 tweets annotated for three distinct tasks: category classification, identification of hate targets, and rating of offensiveness and hatefulness intensities. Our study shows that a considerable majority of tweets fall into the less offensive and less hateful intensity levels, underscoring the need for early interventions by stakeholders. The prevalence of ethnic and political hatred targets, with significant overlaps in our dataset, highlights the complex relationships within Ethiopia's sociopolitical landscape. We build classification and regression models and investigate their efficacy on these tasks. Our results reveal that hate and offensive speech cannot be addressed by simplistic binary classification, instead manifesting along a continuous range of values. The Afro-XLMR-large model performs best, achieving F1-scores of 75.30%, 70.59%, and 29.42% for the category, target, and regression tasks, respectively; its correlation coefficient of 80.22% indicates strong alignment between predictions and annotations.
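To show how the intensity-rating task can be framed as text regression rather than binary classification, here is a setup sketch using the public `Davlan/afro-xlmr-large` checkpoint; whether this matches the authors' exact configuration is our assumption.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "Davlan/afro-xlmr-large"   # public checkpoint; assumed, not confirmed
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(
    ckpt,
    num_labels=1,                 # single continuous intensity score
    problem_type="regression",    # MSE loss instead of cross-entropy
)

# Sample (placeholder) Amharic text; real training would use the dataset.
batch = tok(["የምሳሌ ጽሑፍ"], return_tensors="pt", truncation=True)
out = model(**batch)
print(out.logits.shape)           # (1, 1): one predicted intensity value
```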

The advent of large language models marks a revolutionary breakthrough in artificial intelligence. With the unprecedented scale of training data and model parameters, the capability of large language models has improved dramatically, leading to human-like performance in understanding, language synthesis, and common-sense reasoning. Such a major leap forward in general AI capacity will change how personalization is conducted. For one thing, it will reform the way humans interact with personalization systems: instead of being a passive medium of information filtering, large language models provide a foundation for active user engagement, on top of which user requests can be proactively explored and the information users need can be delivered in a natural and explainable way. For another, it will considerably expand the scope of personalization, growing it from the sole function of collecting personalized information to the compound function of providing personalized services. By leveraging large language models as a general-purpose interface, personalization systems can compile user requests into plans, call the functions of external tools to execute those plans, and integrate the tools' outputs to complete end-to-end personalization tasks. Large language models are still being developed, and their application to personalization remains largely unexplored; we therefore consider this the right time to review the challenges in personalization and the opportunities to address them with LLMs. In particular, we dedicate this perspective paper to discussing the following aspects: the development of and challenges for existing personalization systems, the newly emerged capabilities of large language models, and potential ways of making use of large language models for personalization.
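The "compile requests into plans, call external tools, integrate outputs" pattern can be sketched in a few lines. The tool names, plan format, and `llm` stub below are invented for illustration; this is the general pattern the paper describes, not any system it proposes.

```python
def llm(prompt: str) -> str:
    # Placeholder: replace with a real LLM call.
    return "search_flights;book_hotel"

TOOLS = {  # hypothetical external tools
    "search_flights": lambda user: f"flight options for {user['name']}",
    "book_hotel": lambda user: f"hotel options near {user['city']}",
}

def personalize(request: str, user_profile: dict) -> str:
    # 1) Compile the request into a plan over available tools.
    plan = llm(f"User profile: {user_profile}\nRequest: {request}\n"
               f"Choose tools from {list(TOOLS)} as a ';'-separated plan:")
    # 2) Execute the plan by calling the chosen tools.
    results = [TOOLS[s](user_profile) for s in plan.split(";") if s in TOOLS]
    # 3) Integrate tool outputs into a personalized response.
    return llm(f"Compose a personalized answer from: {results}")

print(personalize("plan my trip", {"name": "Ada", "city": "Zurich"}))
```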

Human intelligence thrives on cognitive synergy, where collaboration and information integration among different cognitive processes yield superior outcomes compared to those processes operating in isolation. Although Large Language Models (LLMs) have demonstrated promising performance as general task-solving agents, they still struggle with tasks that require intensive domain knowledge and complex reasoning. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging it in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaborates with multiple minds, combining their individual strengths and knowledge to enhance problem-solving and overall performance in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. We find that assigning multiple fine-grained personas elicits better problem-solving than using a single or fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance reasoning abilities in LLMs, SPP effectively elicits internal knowledge-acquisition abilities, reduces hallucination, and maintains strong reasoning capabilities. Code, data, and prompts can be found at https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
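A sketch of what an SPP-style prompt might look like: a single prompt instructing one LLM to simulate several personas that discuss, critique, and refine an answer in turns. The persona names and wording here are illustrative, not the authors' released prompts (see their repository for those).

```python
def spp_prompt(task: str, personas: list[str]) -> str:
    """Build a single-LLM, multi-persona self-collaboration prompt."""
    roster = ", ".join(personas)
    return (
        f"You will simulate a collaboration between: {roster}.\n"
        f"Task: {task}\n"
        "Let each participant speak in turn, pointing out errors and "
        "refining the draft, then output 'Final answer:' followed by "
        "the agreed solution."
    )

print(spp_prompt(
    "Write a short story that correctly mentions a verifiable sports fact.",
    ["AI Assistant", "Sports Historian", "Fiction Editor"],
))
```

Note that, unlike multi-agent frameworks, all personas here are played by the same model within one conversation, which is what makes SPP a "solo performance."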

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
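As a worked toy example of one classical idea such a survey covers, the snippet below estimates a treatment effect from observational data by adjusting for a confounder (backdoor adjustment), contrasting it with the biased naive difference. The data-generating numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.binomial(1, 0.5, n)                     # confounder
t = rng.binomial(1, 0.2 + 0.6 * z)              # treatment depends on z
y = 1.0 * t + 2.0 * z + rng.standard_normal(n)  # true effect of t is 1.0

# Naive contrast is biased: it mixes in z's effect on both t and y.
naive = y[t == 1].mean() - y[t == 0].mean()

# Backdoor adjustment: E_z[ E[y|t=1,z] - E[y|t=0,z] ], weighting by P(z).
adjusted = sum(
    (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean())
    * (z == v).mean()
    for v in (0, 1)
)
print(f"naive: {naive:.2f}  adjusted: {adjusted:.2f}  (truth: 1.00)")
```

With big data, the averages above are estimated more precisely, but the need to identify the right adjustment set is unchanged, which is one sense in which copious data facilitates some steps of causal learning while leaving others as hard as before.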
