
This paper explores the transformative potential of artificial intelligence (AI) in the context of sustainable agricultural development across diverse regions in Africa. Delving into opportunities, challenges, and impact, the study navigates through the dynamic landscape of AI applications in agriculture. Opportunities such as precision farming, crop monitoring, and climate-resilient practices are examined, alongside challenges related to technological infrastructure, data accessibility, and skill gaps. The article analyzes the impact of AI on smallholder farmers, supply chains, and inclusive growth. Ethical considerations and policy implications are also discussed, offering insights into responsible AI integration. By providing a nuanced understanding, this paper contributes to the ongoing discourse on leveraging AI for fostering sustainability in African agriculture.

Related Content

The journal Artificial Intelligence (AI) is widely recognized as the premier international forum for publishing the latest research results in the field. The journal welcomes papers on a broad range of AI topics that advance the field as a whole, as well as papers presenting AI applications; for the latter, the emphasis should be on how new and novel AI methods improve performance in the application domain, rather than on yet another application of conventional AI methods. Application papers should describe a principled solution, emphasize its novelty, and provide an in-depth evaluation of the AI techniques being developed. Official website:

The growing integration of large language models (LLMs) into social operations amplifies their impact on decisions in crucial areas such as economics, law, education, and healthcare, raising public concerns about these models' discrimination-related safety and reliability. However, prior discrimination-measuring frameworks assess only the average discriminatory behavior of LLMs, which often proves inadequate because they overlook an additional discrimination-leading factor: the variation of LLMs' predictions across diverse contexts. In this work, we present the Prejudice-Caprice Framework (PCF), which comprehensively measures discrimination in LLMs by considering both their consistently biased preferences and their preference variation across diverse contexts. Specifically, we mathematically dissect the aggregated contextualized discrimination risk of LLMs into prejudice risk, originating from LLMs' persistent prejudice, and caprice risk, stemming from their generation inconsistency. In addition, we utilize a data-mining approach to gather preference-detecting probes from sentence skeletons, devoid of attribute indications, to approximate LLMs' applied contexts. While initially intended for assessing discrimination in LLMs, our proposed PCF facilitates the comprehensive and flexible measurement of any inductive biases, including knowledge alongside prejudice, across various modality models. We apply our discrimination-measuring framework to 12 common LLMs, yielding intriguing findings: i) modern LLMs demonstrate significant pro-male stereotypes; ii) LLMs' exhibited discrimination correlates with several social and economic factors; iii) prejudice risk dominates the overall discrimination risk and follows a normal distribution; and iv) caprice risk contributes minimally to the overall risk but follows a fat-tailed distribution, suggesting that it is a wild risk requiring enhanced surveillance.
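
The paper's exact risk definitions are not reproduced in the abstract, but the prejudice/caprice split mirrors a standard second-moment decomposition. Below is a minimal sketch, assuming the aggregated risk is the second moment of a signed, context-indexed preference score; the scoring convention is hypothetical:

```python
import numpy as np

def decompose_discrimination_risk(preferences: np.ndarray) -> dict:
    """Split an aggregated contextualized risk into prejudice and caprice terms.

    `preferences` holds one signed preference score per probed context
    (e.g., +1 = pro-male completion, -1 = pro-female completion).
    Assuming the aggregated risk is the second moment E[p^2], it decomposes
    exactly as E[p]^2 (persistent prejudice) + Var[p] (generation
    inconsistency, i.e., caprice).
    """
    mean = preferences.mean()
    prejudice_risk = mean ** 2          # consistently biased preference
    caprice_risk = preferences.var()    # preference variation across contexts
    return {
        "aggregated_risk": prejudice_risk + caprice_risk,
        "prejudice_risk": prejudice_risk,
        "caprice_risk": caprice_risk,
    }

# Toy usage: scores from probing one LLM across five sentence-skeleton contexts.
scores = np.array([0.6, 0.4, 0.7, 0.5, 0.8])
print(decompose_discrimination_risk(scores))
```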

This paper examines the current landscape of AI regulations across various jurisdictions, highlighting the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework to bridge the global divide. While the U.N. is developing an international AI governance framework and the G7 has endorsed a risk-based approach, there is no consensus on their details. The EU, Canada, and Brazil (and potentially South Korea) follow a horizontal or lateral approach that postulates the homogeneity of AI, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.S., the U.K., Israel, and Switzerland (and potentially China) have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. Horizontal approaches like the EU AI Act do not guarantee sufficient levels of proportionality and foreseeability; rather, they impose a one-size-fits-all bundle of regulations on any high-risk AI, even when it would be feasible to differentiate between various AI models and legislate them individually. The context-specific approach holds greater promise but requires further development regarding details, coherent regulatory objectives, and commensurable standards. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework bifurcates the AI life cycle into two phases: learning and utilization for specific tasks; and categorizes these tasks based on their application and interaction with humans as follows: autonomous, discriminative (allocative, punitive, and cognitive), and generative AI. To ensure coherency, each category is assigned regulatory objectives. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics that can be readily integrated into AI systems.
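
As an illustrative encoding only (the phase and category names follow the abstract; the objective strings are placeholders, not the paper's wording), the 3C categorization could be sketched as:

```python
from enum import Enum

class Phase(Enum):
    LEARNING = "learning"
    UTILIZATION = "utilization for specific tasks"

class Category(Enum):
    AUTONOMOUS = "autonomous"
    ALLOCATIVE = "discriminative: allocative"
    PUNITIVE = "discriminative: punitive"
    COGNITIVE = "discriminative: cognitive"
    GENERATIVE = "generative"

# Hypothetical objective per category: the 3C framework assigns one set of
# regulatory objectives to each category, but these strings are placeholders.
OBJECTIVES = {
    Category.AUTONOMOUS: "physical safety of operation",
    Category.ALLOCATIVE: "non-discrimination in allocation",
    Category.PUNITIVE: "due process and proportionality",
    Category.COGNITIVE: "accuracy and contestability",
    Category.GENERATIVE: "content provenance and misuse prevention",
}

for cat, obj in OBJECTIVES.items():
    print(f"{cat.value:>28} -> {obj}")
```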

This paper investigates the problem of noncoherent direction-of-arrival (DOA) estimation using different sparse subarrays. In particular, we present a Multiple Measurement Vector (MMV) model for noncoherent DOA estimation based on a low-rank and sparse recovery optimization problem. Moreover, we develop two practical strategies for obtaining sparse arrays and subarrays: i) the subarrays are generated from a main sparse array geometry (Type-I sparse array), and ii) the sparse subarrays are designed directly and grouped together to form the whole sparse array (Type-II sparse array). Numerical results demonstrate that the proposed MMV model benefits from multiple data records and that Type-II sparse noncoherent arrays offer superior DOA estimation performance.
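
As a minimal illustration of the two construction strategies (the element positions below, in half-wavelength units, are invented for demonstration and are not the paper's designs):

```python
import numpy as np

def type_i_subarrays(main_array, n_sub):
    """Type-I: partition a given main sparse array into subarrays."""
    main_array = np.sort(np.asarray(main_array))
    return np.array_split(main_array, n_sub)

def type_ii_array(subarrays):
    """Type-II: design the subarrays first, then merge them into the full array."""
    return np.unique(np.concatenate([np.asarray(s) for s in subarrays]))

# Example: a nested-style main array split into two subarrays (Type-I) ...
main = [0, 1, 2, 3, 7, 11, 15]
subs_i = type_i_subarrays(main, 2)

# ... versus two directly designed sparse subarrays merged together (Type-II).
subs_ii = [[0, 1, 2, 3], [5, 9, 13, 17]]
full_ii = type_ii_array(subs_ii)
print(subs_i, full_ii)
```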

This article proposes a method to uncover opportunities for exploitation and exploration from consumer IoT interaction data. We develop a unique decomposition of cosine similarity that quantifies exploitation through functional similarity of interactions, exploration through cross-capacity similarity of counterfactual interactions, and differentiation of the two opportunities through within-similarity. We propose a topological data analysis method that incorporates these components of similarity and enables their visualization. Functionally similar automations reveal exploitation opportunities for substitutes-in-use or complements-in-use, while exploration opportunities extend functionality into new use cases. This data-driven approach gives marketers a powerful capability to discover possibilities for refining existing automation features while exploring new ones. More generally, our approach can aid marketing efforts to balance these strategic opportunities in high-technology contexts.
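
The paper's precise decomposition is not spelled out in the abstract; the sketch below illustrates only the general principle it builds on, namely that cosine similarity decomposes additively over blocks of features (the block labels and data are hypothetical):

```python
import numpy as np

def blockwise_cosine(x, y, blocks):
    """Decompose cosine similarity into additive per-block contributions.

    `blocks` maps a label to the index slice of the feature vector it owns
    (e.g., features describing one device capacity). The contributions sum
    exactly to the overall cosine similarity, so each block's share can be
    read as that component's part of the total similarity.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return {label: float(x[idx] @ y[idx]) / denom for label, idx in blocks.items()}

# Toy usage: first three features = "lighting" capacity, last three = "climate".
blocks = {"lighting": slice(0, 3), "climate": slice(3, 6)}
a = [1.0, 0.5, 0.0, 0.2, 0.0, 0.9]
b = [0.8, 0.4, 0.1, 0.0, 0.3, 0.7]
parts = blockwise_cosine(a, b, blocks)
print(parts, sum(parts.values()))  # the parts sum to the overall cosine similarity
```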

The task of multimedia geolocation is becoming an increasingly essential component of the digital forensics toolkit to effectively combat human trafficking, child sexual exploitation, and other illegal acts. Typically, metadata-based geolocation information is stripped when multimedia content is shared via instant messaging and social media. The intricacy of geolocating, geotagging, or finding geographical clues in this content is often overly burdensome for investigators. Recent research has shown that contemporary advancements in artificial intelligence, specifically computer vision and deep learning, show significant promise towards expediting the multimedia geolocation task. This systematic literature review thoroughly examines the state of the art in leveraging computer vision techniques for multimedia geolocation and assesses their potential to expedite human trafficking investigations. It provides a comprehensive overview of computer vision-based approaches to multimedia geolocation, identifies their applicability to combating human trafficking, and highlights the potential implications of enhanced multimedia geolocation for prosecuting human trafficking. A total of 123 articles inform this systematic literature review. The findings suggest numerous promising paths for impactful future research on the subject.

This paper presents the first systematic study of the evaluation of Deep Neural Networks (DNNs) for discrete dynamical systems under stochastic assumptions, with a focus on wildfire prediction. We develop a framework to study the impact of stochasticity on two classes of evaluation metrics: classification-based metrics, which assess fidelity to observed ground truth (GT), and proper scoring rules, which test fidelity-to-statistic. Our findings reveal that evaluating for fidelity-to-statistic is a reliable alternative in highly stochastic scenarios. We extend our analysis to real-world wildfire data, highlighting limitations in traditional wildfire prediction evaluation methods, and suggest interpretable stochasticity-compatible alternatives.
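
As a minimal illustration of the two metric families (not the paper's experimental setup; the fire-spread probabilities are synthetic), the sketch below scores a probabilistic forecast both against a single observed ground-truth realization (a classification metric) and against the forecast probabilities themselves via a proper scoring rule (the Brier score):

```python
import numpy as np

rng = np.random.default_rng(0)

# A stochastic system: each cell ignites with true probability p_true.
p_true = rng.uniform(0.0, 1.0, size=1000)
ground_truth = rng.random(1000) < p_true          # one observed realization

# A forecaster that has learned the true statistics perfectly.
p_forecast = p_true

# Fidelity-to-GT: classification accuracy against the single realization.
pred_labels = p_forecast >= 0.5
accuracy = (pred_labels == ground_truth).mean()

# Fidelity-to-statistic: Brier score (a proper scoring rule); lower is better.
brier = np.mean((p_forecast - ground_truth.astype(float)) ** 2)

print(f"accuracy vs one realization: {accuracy:.3f}")
print(f"Brier score: {brier:.3f}")
# Even a perfectly calibrated forecaster scores imperfect accuracy against a
# single stochastic realization, whereas the proper scoring rule is minimized
# in expectation exactly by the true probabilities.
```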

We propose two novel extensions of the Wyner common information optimization problem, each relaxing one fundamental constraint in Wyner's formulation. The \textit{Variational Wyner Common Information} relaxes the matching constraint to the known distribution while imposing conditional independence on the feasible solution set. We derive a tight surrogate upper bound of the resulting unconstrained Lagrangian via the theory of variational inference, which can be minimized efficiently. This solver caters to problems where conditional independence holds, with significantly reduced computational complexity. The \textit{Bipartite Wyner Common Information}, on the other hand, relaxes the conditional independence constraint while enforcing the matching condition on the feasible set. By leveraging the difference-of-convex structure of the formulated optimization problem, we show that this solver is resilient to conditionally dependent sources. Both solvers are provably convergent (to local stationary points) and, empirically, obtain more accurate solutions to Wyner's formulation with substantially less runtime. Moreover, they can be extended to unknown distribution settings by parameterizing the common randomness as a member of the exponential family of distributions. Our approaches apply to multi-modal clustering problems, where multiple modalities of observations come from the same cluster. Empirically, our solvers significantly outperform state-of-the-art multi-modal clustering algorithms.
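
For reference, both extensions can be read against the standard statement of Wyner's problem (a textbook formulation, not copied from this paper): the common rate I(X,Y;W) is minimized over joint distributions of (X,Y,W), and each solver relaxes one of the two constraints.

```latex
% Wyner common information (standard formulation):
C(X;Y) \;=\; \min_{Q_{X,Y,W}} \; I_Q(X,Y;W)
\quad \text{s.t.} \quad
\underbrace{X \perp Y \mid W}_{\text{relaxed by the bipartite solver}},
\qquad
\underbrace{Q_{X,Y} = P_{X,Y}}_{\text{relaxed by the variational solver}}.
```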

Sociotechnical research increasingly includes the social sub-networks that emerge from large-scale sociotechnical infrastructure, including the infrastructure for building open source software. This paper frames these numerous sub-networks as advantageous for researchers. It provides a methodological synthesis focusing on how researchers can best span adjacent social sub-networks during engaged field research. Specifically, we describe practices and artifacts that aid movement from one social subsystem within a more extensive technical infrastructure to another. To surface the importance of spanning sub-networks, we incorporate a discussion of social capital and the role of technical infrastructure in its development for sociotechnical researchers. We then characterize a five-step process for spanning social sub-networks during engaged field research: commitment, context mapping, jargon competence, returning value, and bridging. We then present our experience studying corporate open source software projects and the role of that experience in accelerating our work in open source scientific software research, described through the lens of bridging social capital. Based on our analysis, we offer recommendations for engaging in fieldwork across adjacent social sub-networks that share a technical context, and a discussion of how the relationship between socially and technically acquired social capital is a missing but critical methodological dimension for research on large-scale sociotechnical systems.

Human intelligence thrives on the concept of cognitive synergy, where collaboration and information integration among different cognitive processes yield superior outcomes compared to individual cognitive processes in isolation. Although Large Language Models (LLMs) have demonstrated promising performance as general task-solving agents, they still struggle with tasks that require intensive domain knowledge and complex reasoning. In this work, we propose Solo Performance Prompting (SPP), which transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas. A cognitive synergist is an intelligent agent that collaborates with multiple minds, combining their individual strengths and knowledge, to enhance problem-solving and overall performance in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs. We find that assigning multiple fine-grained personas in LLMs elicits better problem-solving abilities than using a single or a fixed number of personas. We evaluate SPP on three challenging tasks: Trivia Creative Writing, Codenames Collaborative, and Logic Grid Puzzle, encompassing both knowledge-intensive and reasoning-intensive types. Unlike previous works, such as Chain-of-Thought, that solely enhance the reasoning abilities of LLMs, SPP effectively elicits internal knowledge acquisition abilities, reduces hallucination, and maintains strong reasoning capabilities. Code, data, and prompts can be found at: https://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.
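
A minimal sketch of the multi-persona self-collaboration pattern follows; the exact SPP prompt wording lives in the linked repository, so the persona roster, instructions, and task below are illustrative placeholders:

```python
def build_spp_prompt(task: str, personas: list[str]) -> str:
    """Compose a single-LLM, multi-persona collaboration prompt (SPP-style)."""
    roster = ", ".join(personas)
    turns = "\n".join(
        f"{p}: <contribute your expertise, then critique the current draft>"
        for p in personas
    )
    return (
        "You are an AI assistant that solves tasks by simulating a "
        f"collaboration between several personas: {roster}.\n"
        f"Task: {task}\n"
        "Proceed in rounds. In each round:\n"
        f"{turns}\n"
        "After the personas reach consensus, output the final answer."
    )

# Usage: SPP identifies personas dynamically from the task input; they are
# hard-coded here for illustration.
prompt = build_spp_prompt(
    task="Write a 4-line poem that embeds three facts about the Apollo program.",
    personas=["AI Assistant (leader)", "Space Historian", "Poet"],
)
print(prompt)
```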

A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets and clear articulations of the remaining challenges. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding. In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the computational linguistics community.
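
As a minimal illustration of one of the settings the survey covers (text used to address confounding), the sketch below adjusts for a text-derived confounder when estimating a treatment effect; the data, feature rule, and estimator are invented for demonstration and are not from the survey:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the text drives both treatment assignment and the outcome.
docs = ["severe symptoms reported", "mild case", "severe pain and fever",
        "routine checkup", "severe complications", "mild discomfort"]
severity = np.array([1.0 if "severe" in d else 0.0 for d in docs])  # text-derived confounder
treatment = np.array([1, 0, 1, 1, 0, 0], dtype=float)   # sicker units treated more often
outcome = 2.0 * treatment - 3.0 * severity + rng.normal(0, 0.1, len(docs))

# The naive difference in means is biased because severity drives both T and Y.
naive = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

# Adjusting for the text-derived confounder recovers the true effect (2.0).
X = np.column_stack([np.ones(len(docs)), treatment, severity])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"naive: {naive:.2f}, text-adjusted: {coef[1]:.2f}")
```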
