
There is mounting public concern over the influence that AI-based systems have on our society. Coalitions across all sectors are acting worldwide to resist harmful applications of AI. From Indigenous peoples addressing the lack of reliable data, to smart-city stakeholders, to students protesting academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. Biased, wrongful, and disturbing assumptions embedded in AI algorithms could become locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of AI's greatest contributions will be to make us understand how important human wisdom truly is to life on earth.

Related Content

Responsible AI requires organisations to establish standards for the use of artificial intelligence. First, the use of AI should comply with ethics and regulations in all respects; second, a sound governance mechanism is needed from development through deployment; third, strong oversight mechanisms are required to ensure that its use is fair and just, easy to understand, and safe and stable.

This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group for LMs. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false or misleading information, including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.

Intelligent Personal Assistants (IPAs) like Amazon Alexa, Apple Siri, and Google Assistant are increasingly becoming a part of our everyday lives. As IPAs become ubiquitous and their applications expand, users turn to them not just for routine tasks but also for intelligent conversations. In this study, we measure the emotional intelligence (EI) displayed by IPAs in the English and Hindi languages; to our knowledge, this is a pioneering effort in probing the emotional intelligence of IPAs in Indian languages. We pose utterances that convey the Sadness or Humor emotion and evaluate IPA responses. We build on previous research to propose a quantitative and qualitative evaluation scheme encompassing new criteria from social-science perspectives (display of empathy, wit, understanding) and IPA-specific features (voice modulation, search redirects). We find that the EI displayed by Google Assistant in Hindi is comparable to the EI displayed in English, with the assistant employing both voice modulation and emojis in text. However, we also find that IPAs are unable to understand and respond intelligently to all queries, sometimes even offering counter-productive and problematic responses. Our experiment offers evidence and directions for augmenting the potential for EI in IPAs.
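
As a rough illustration of how such an evaluation scheme might be operationalised, the Python sketch below annotates one IPA response against the criteria named in the abstract and aggregates them into a score. The field names, weights, and example annotation are hypothetical and are not taken from the study.

    # A toy sketch (not the study's instrument): annotate one IPA response against
    # the stated criteria and aggregate into a score. Field names, weights, and the
    # example annotation are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ResponseAnnotation:
        empathy: bool            # does the reply acknowledge the user's feeling?
        wit: bool                # is any humour appropriate to the utterance?
        understanding: bool      # was the conveyed emotion recognised at all?
        voice_modulation: bool   # IPA-specific: prosody adapted to the emotion
        search_redirect: bool    # IPA-specific: deflected to a web search instead

    def ei_score(a: ResponseAnnotation) -> float:
        """Aggregate score: social-science criteria weighted above IPA features."""
        score = 2.0 * a.empathy + 1.5 * a.wit + 1.5 * a.understanding
        score += 1.0 * a.voice_modulation
        score -= 1.0 * a.search_redirect  # deflecting instead of engaging is penalised
        return score

    # Example: a Sadness utterance answered empathetically with modulated voice.
    print(ei_score(ResponseAnnotation(True, False, True, True, False)))  # 4.5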

Policymakers face the broader challenge of how to view AI capabilities today and where society stands in terms of those capabilities. This paper surveys AI capabilities and tackles this very issue, exploring it in the context of political security in digital societies. We introduce a Matrix of Machine Influence to frame and navigate the adversarial applications of AI, and further extend the ideas of Information Management to better understand the deployment of contemporary AI systems as part of a complex information system. Providing a comprehensive review of man-machine interactions in our networked society and political systems, we suggest that better regulation and management of information systems can more optimally offset the risks of AI and utilise the emerging capabilities that these systems have to offer to policymakers and political institutions across the world. We hope this essay will stimulate further debate and discussion of these ideas and prove to be a useful contribution towards governing the future of AI.

As the globally increasing population drives rapid urbanisation in various parts of the world, there is a great need to deliberate on the future of cities worth living in. In particular, as modern smart cities embrace more and more data-driven artificial intelligence services, it is worth remembering that technology can facilitate prosperity, wellbeing, urban livability, and social justice, but only when it has the right analog complements (such as well-thought-out policies, mature institutions, and responsible governance); the ultimate objective of these smart cities is to facilitate and enhance human welfare and social flourishing. Researchers have shown that various technological business models and features can in fact contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In light of these observations, addressing the philosophical and ethical questions involved in ensuring the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities assumes paramount importance. Globally, there are calls for technology to be made more humane and human-centered. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical (data and algorithmic) challenges, to a successful deployment of AI in human-centric applications, with a particular emphasis on the convergence of these concepts/challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving other challenges. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and how it can fill the current gaps and lead to better solutions. We believe such rigorous analysis will provide a baseline for future research in the domain.

We propose three novel consistent specification tests for quantile regression models which generalize former tests in three ways. First, we allow the covariate effects to be quantile-dependent and nonlinear. Second, we allow the conditional quantile functions to be parameterized by appropriate basis functions rather than by a fixed parametric form. We are hence able to test for functional forms beyond linearity, while retaining linear effects as special cases. In both cases, the induced class of conditional distribution functions is tested with a Cramér-von Mises type test statistic, for which we derive the theoretical limit distribution and propose a bootstrap method. Third, to increase the power of the tests, we further suggest a modified test statistic. We highlight the merits of our tests in a detailed Monte Carlo study and two real data examples. Our first application, to conditional income distributions in Germany, indicates that there are not only still significant differences between East and West but also across the quantiles of the conditional income distributions, when conditioning on age and year. The second application, to data from the Australian national electricity market, reveals the importance of using interaction effects for modelling the highly skewed and heavy-tailed distributions of energy prices conditional on day, time of day, and demand.
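
To make the testing idea concrete, the Python sketch below computes a Cramér-von Mises-type statistic that aggregates, over a quantile grid, the squared deviations of the marked empirical process implied by a fitted linear quantile model. The simulated data, the quantile grid, and the use of statsmodels' QuantReg are assumptions made purely for this example; it is not the authors' implementation, and in practice critical values would come from a bootstrap, as the paper proposes.

    # Illustration only: a CvM-type statistic for a linear-in-parameters conditional
    # quantile model, aggregated over a quantile grid. Data, grid, and QuantReg are
    # assumptions for this example; critical values would come from a bootstrap.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 400
    x = rng.uniform(0, 1, n)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # data generated from a linear model
    X = sm.add_constant(x)
    taus = np.arange(0.1, 0.91, 0.1)

    def cvm_statistic(y, x, X, taus):
        """Sum over the tau grid of the mean squared marked empirical process."""
        stat = 0.0
        for tau in taus:
            qhat = sm.QuantReg(y, X).fit(q=tau).predict(X)
            marks = (y <= qhat).astype(float) - tau          # indicator residuals
            # evaluate the process at every observed covariate value
            proc = np.array([np.mean(marks * (x <= x0)) for x0 in x])
            stat += np.mean(proc ** 2)
        return len(y) * stat

    print(cvm_statistic(y, x, X, taus))   # small when the model is well specified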

Federated Learning (FL) is a distributed machine learning protocol that allows a set of agents to collaboratively train a model without sharing their datasets. This makes FL particularly suitable for settings where data privacy is desired. However, it has been observed that the performance of FL is closely related to the similarity of the agents' local data distributions. In particular, as the data distributions of agents diverge, the accuracy of the trained models drops. In this work, we look at how variations in local data distributions affect the fairness and robustness properties of the trained models, in addition to their accuracy. Our experimental results indicate that the trained models exhibit higher bias and become more susceptible to attacks as local data distributions differ. Importantly, the degradation in fairness and robustness can be much more severe than that in accuracy. Therefore, we reveal that small variations that have little impact on accuracy could still be important if the trained model is to be deployed in a fairness- or security-critical context.
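
The Python sketch below shows, in simplified form, how this kind of experiment is commonly set up: local label distributions are skewed with a Dirichlet parameter and a FedAvg-style average of local models is trained. The logistic-regression model, the synthetic data, and the value of alpha are illustrative assumptions rather than the paper's experimental configuration.

    # A minimal FedAvg-style sketch, not the paper's setup: local label mixtures are
    # skewed with a Dirichlet split (small alpha = more heterogeneous) and local
    # logistic-regression models are averaged each round.
    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, n_per_agent, dim, rounds = 5, 200, 10, 20
    alpha = 0.1                       # small alpha -> highly non-IID label mixtures
    w_true = rng.normal(size=dim)

    def make_agent_data(label_probs):
        X = rng.normal(size=(n_per_agent, dim))
        y = (X @ w_true > 0).astype(float)
        keep = rng.random(n_per_agent) < label_probs[y.astype(int)]  # skew class mix
        return X[keep], y[keep]

    agents = [make_agent_data(rng.dirichlet([alpha, alpha])) for _ in range(n_agents)]

    def local_sgd(w, X, y, lr=0.1, epochs=5):
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w = w - lr * X.T @ (p - y) / len(y)
        return w

    w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_sgd(w.copy(), X, y) for X, y in agents if len(y) > 0]
        w = np.mean(updates, axis=0)            # FedAvg: average the local models

    # Accuracy, per-group error gaps (fairness) and adversarial sensitivity (robustness)
    # of this aggregated model would then be compared across values of alpha.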

Modern cars are evolving in many ways. Technologies such as infotainment systems and companion mobile applications collect a variety of personal data from drivers to enhance the user experience. This paper investigates the extent to which car drivers understand the implications for their privacy, including that car manufacturers must treat that data in compliance with the relevant regulations. It does so by distilling out drivers' concerns about privacy and relating them to their perceptions of trust in car cyber-security. A questionnaire was designed for this purpose to collect answers from a set of 1101 participants, so that the results are statistically relevant. In short, privacy concerns are modest, perhaps because there is still insufficient general awareness of the personal data that are involved, both for in-vehicle treatment and for transmission over the Internet. Trust perceptions of cyber-security are modest too (lower than those of car safety), a surprising contradiction of our research hypothesis that privacy concerns and trust perceptions of car cyber-security are opposed. We interpret this as a clear demand for information and awareness-building campaigns for car drivers, as well as for technical cyber-security and privacy measures that are truly considerate of the human factor.

Randomized controlled trials are not only the gold standard in medicine and vaccine trials but have also spread to many other disciplines, such as behavioral economics, making them an important interdisciplinary tool for scientists. When designing randomized controlled trials, how to assign participants to treatments becomes a key issue. In particular, in the presence of covariate factors, the assignment can significantly influence statistical properties and thereby the quality of the trial. Another key issue is the widely popular assumption among experimenters that participants do not influence each other -- which is far from reality in a field study and can, if unaccounted for, degrade the quality of the trial. We address both issues in our work. After introducing randomized controlled trials and bridging terms from different disciplines, we first address the issue of participant-treatment assignment in the presence of known covariate factors. In doing so, we review a recent assignment algorithm that achieves good worst-case variance bounds. Second, we address social spillover effects. To this end, we build a comprehensive graph-based model of influence between participants, for which we design our own average treatment effect estimator $\hat \tau_{net}$. We discuss its bias and variance and reduce the problem of variance minimization to a certain instance of minimizing the norm of a matrix-vector product, which has been considered in the literature before. Further, we discuss the role of disconnected components in the model's underlying graph.
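
The Python sketch below illustrates the spillover problem on a toy random graph: a naive difference in means is biased when outcomes depend on the fraction of treated neighbours, while an exposure-adjusted regression separates the direct effect. This is a generic illustration under an assumed linear spillover model and a random graph; it is not an implementation of the paper's estimator $\hat \tau_{net}$.

    # Toy interference example: naive difference in means vs. an exposure-adjusted
    # regression. Linear spillover model and random graph are assumptions for this
    # example only; this is not the paper's estimator.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    A = (rng.random((n, n)) < 0.03).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                   # undirected random graph
    deg = np.maximum(A.sum(axis=1), 1.0)

    z = rng.binomial(1, 0.5, n)                   # randomised treatment assignment
    exposure = (A @ z) / deg                      # fraction of treated neighbours

    tau, gamma = 2.0, 1.0                         # direct and spillover effects
    y = 1.0 + tau * z + gamma * exposure + rng.normal(0, 1, n)

    naive = y[z == 1].mean() - y[z == 0].mean()   # ignores spillover -> biased

    # Regress y on treatment and neighbourhood exposure to separate the direct
    # effect from the interference term.
    Xmat = np.column_stack([np.ones(n), z, exposure])
    beta = np.linalg.lstsq(Xmat, y, rcond=None)[0]
    print(f"naive: {naive:.2f}, exposure-adjusted direct effect: {beta[1]:.2f}")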

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.

In recent years, Artificial Intelligence (AI) has achieved notable momentum that may deliver the best of expectations over many application sectors across the field. For this to occur, the entire community stands in front of the barrier of explainability, an inherent problem of AI techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to XAI with a reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
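
As one concrete example of the model-agnostic, post-hoc techniques such a survey covers, the Python sketch below computes permutation feature importance for an opaque classifier. The synthetic data, the random-forest model, and the scoring choices are illustrative assumptions and not a method proposed in the paper.

    # One post-hoc, model-agnostic explanation technique of the kind surveyed above:
    # permutation feature importance. Data, model, and scoring are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only the first two features matter

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

    # Shuffle one feature at a time and record the drop in held-out accuracy:
    # a large drop means the opaque model relies on that feature.
    result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: importance {imp:.3f}")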
