
Efforts to promote equitable public policy with algorithms appear to be fundamentally constrained by the "impossibility of fairness" (an incompatibility between mathematical definitions of fairness). This technical limitation raises a central question about algorithmic fairness: How can computer scientists and policymakers support equitable policy reforms with algorithms? In this article, I argue that promoting justice with algorithms requires reforming the methodology of algorithmic fairness. First, I diagnose the problems of the current methodology for algorithmic fairness, which I call "formal algorithmic fairness." Because formal algorithmic fairness restricts analysis to isolated decision-making procedures, it leads to the impossibility of fairness and to models that exacerbate oppression despite appearing "fair." Second, I draw on theories of substantive equality from law and philosophy to propose an alternative methodology, which I call "substantive algorithmic fairness." Because substantive algorithmic fairness takes a more expansive scope of analysis, it enables an escape from the impossibility of fairness and provides a rigorous guide for alleviating injustice with algorithms. In sum, substantive algorithmic fairness presents a new direction for algorithmic fairness: away from formal mathematical models of "fair" decision-making and toward substantive evaluations of whether and how algorithms can promote justice in practice.
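To make the incompatibility concrete, the sketch below works through the confusion-matrix identity underlying one well-known version of the impossibility result (Chouldechova, 2017): if two groups have different base rates, any classifier that equalizes predictive parity (PPV) and false-negative rates across them is forced to have unequal false-positive rates. The numbers are invented purely for illustration.

```python
# A minimal numeric illustration of the "impossibility of fairness":
# for any binary classifier, the confusion-matrix identity
#
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
#
# ties the false-positive rate to the group's base rate p. Holding
# PPV and FNR equal across groups with different base rates therefore
# forces unequal FPRs. All numbers below are made up for illustration.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False-positive rate forced by the confusion-matrix identity."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Two groups with different base rates but identical PPV and FNR.
ppv, fnr = 0.7, 0.3
for group, base_rate in [("A", 0.4), ("B", 0.2)]:
    print(f"group {group}: base rate {base_rate:.0%} -> "
          f"implied FPR {implied_fpr(base_rate, ppv, fnr):.3f}")

# group A: base rate 40% -> implied FPR 0.200
# group B: base rate 20% -> implied FPR 0.075
# Equalizing PPV and FNR forces unequal FPRs, so no classifier can
# satisfy all three criteria at once unless base rates coincide.
```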

Related Content

Programmers' mental models represent their knowledge and understanding of programs, programming concepts, and programming in general. They guide programmers' work and influence their task performance. Understanding mental models is important for designing work systems and practices that support programmers. Although the importance of programmers' mental models is widely acknowledged, research on mental models has decreased over the years. The results are scattered and do not take recent developments in software engineering into account. We analyze the state of research into programmers' mental models and provide an overview of existing research. We connect results on mental models from different strands of research to form a more unified knowledge base on the topic. We conducted a systematic literature review on programmers' mental models. We analyzed literature addressing mental models in different contexts, including mental models of programs, programming tasks, and programming concepts. Using nine search engines, we found 3678 articles (excluding duplicates), of which 84 were selected for further analysis. Using the snowballing technique, we obtained a final result set containing 187 articles. We show that the literature shares a common core of understanding of mental models. By collating and connecting results on mental models from different fields of research, we uncovered several well-researched aspects, which we argue are fundamental characteristics of programmers' mental models. This work provides a basis for future work on mental models. The research field on programmers' mental models still faces many challenges arising from the lack of a shared knowledge base and poorly defined constructs. We created a unified knowledge base on the topic and point to directions for future studies. In particular, we call for studies that examine programmers working with modern practices and tools.

Polynomial kernels are widely used in machine learning and are one of the default choices for developing kernel-based classification and regression models. However, they are rarely considered in numerical analysis due to their lack of strict positive definiteness. In particular, they do not enjoy the usual property of unisolvency for arbitrary point sets, which is one of the key properties used to build kernel-based interpolation methods. This paper is devoted to establishing some initial results for the study of these kernels, and their related interpolation algorithms, in the context of approximation theory. We first prove necessary and sufficient conditions on point sets which guarantee the existence and uniqueness of an interpolant. We then study the Reproducing Kernel Hilbert Spaces (or native spaces) of these kernels and their norms, and provide inclusion relations between spaces corresponding to different kernel parameters. With these spaces at hand, it is further possible to derive generic error estimates which apply to sufficiently smooth functions, thus escaping the native space. Finally, we show how to apply an efficient and stable algorithm to these kernels to obtain accurate interpolants, and we test them in numerical experiments. After this analysis, several computational and theoretical aspects remain open, and we outline possible further research directions in a concluding section. This work builds some bridges between kernel and polynomial interpolation, two topics to which the authors, to different extents, have been introduced under the supervision or through the work of Stefano De Marchi. For this reason, they wish to dedicate this work to him on the occasion of his 60th birthday.
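As a minimal illustration of the unisolvency issue (not the paper's algorithm), the sketch below interpolates with the one-dimensional polynomial kernel K(x, y) = (1 + xy)^d: since this kernel only spans polynomials of degree at most d, the kernel matrix has rank at most d + 1, so an interpolant exists and is unique for up to d + 1 distinct nodes and the matrix becomes singular beyond that.

```python
# A minimal sketch of interpolation with the polynomial kernel
# K(x, y) = (1 + x*y)^d in one dimension, showing why unisolvency
# fails for arbitrary point sets: the kernel matrix has rank at
# most d + 1, whatever the number of nodes.
import numpy as np

def poly_kernel(x, y, d=3):
    return (1.0 + np.outer(x, y)) ** d

d = 3
f = lambda x: np.sin(2 * x)

# d + 1 = 4 distinct nodes: the kernel matrix is nonsingular.
nodes = np.linspace(-1, 1, d + 1)
K = poly_kernel(nodes, nodes, d)
coef = np.linalg.solve(K, f(nodes))

xs = np.linspace(-1, 1, 200)
interp = poly_kernel(xs, nodes, d) @ coef
print("rank of K with 4 nodes:", np.linalg.matrix_rank(K))   # 4
print("max interpolation error:", np.abs(interp - f(xs)).max())

# With more than d + 1 nodes the matrix becomes singular, because
# (1 + x*y)^d only spans polynomials of degree <= d in x:
nodes6 = np.linspace(-1, 1, 6)
K6 = poly_kernel(nodes6, nodes6, d)
print("rank of K with 6 nodes:", np.linalg.matrix_rank(K6))  # still 4
```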

In this paper, we investigate the optimal robot path planning problem for high-level specifications described by co-safe linear temporal logic (LTL) formulae. We consider the scenario where the map geometry of the workspace is partially known. Specifically, we assume that there are some unknown regions, for which the robot does not know their successor regions a priori unless it physically reaches them. In contrast to the standard game-based approach that optimizes the worst-case cost, we propose to use regret as a new metric for planning in such a partially-known environment. The regret of a plan under a fixed but unknown environment is the difference between the actual cost incurred and the best-response cost the robot could have achieved had it known the actual environment in hindsight. We provide an effective algorithm for finding an optimal plan that satisfies the LTL specification while minimizing its regret. A case study on firefighting robots is provided to illustrate the proposed framework. We argue that the new metric is more suitable for partially-known environments, since it captures the trade-off between the actual cost spent and the potential benefit one may obtain from exploring an unknown region.
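The toy example below illustrates the regret metric itself, not the paper's planning algorithm over LTL specifications: the plans, environments, and costs are hypothetical, and the two candidate plans are chosen so that minimizing worst-case cost and minimizing worst-case regret give different answers.

```python
# A toy illustration of the regret metric: a plan's regret under a
# fixed environment is its actual cost minus the best cost achievable
# with hindsight knowledge of that environment; the planner minimizes
# the worst-case regret over all environments consistent with the
# partially-known map. Plans, environments, and costs are made up.
costs = {
    # cost[plan][environment]: "explore" risks a long detour if the
    # unknown region turns out to be blocked; "detour" always pays it.
    "explore": {"open": 4, "blocked": 12},
    "detour":  {"open": 9, "blocked": 9},
}
environments = ["open", "blocked"]

best = {env: min(costs[p][env] for p in costs) for env in environments}

def regret(plan):
    return max(costs[plan][env] - best[env] for env in environments)

for plan in costs:
    print(plan, "worst-case cost:", max(costs[plan].values()),
          "worst-case regret:", regret(plan))
# The worst-case-cost planner picks "detour" (9 vs 12), while the
# regret-minimizing planner picks "explore" (regret 3 vs 5), capturing
# the potential benefit of exploring the unknown region.
```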

Fairness of machine learning (ML) software has become a major concern in the recent past. Although recent research on testing and improving fairness has demonstrated impact on real-world software, practical fairness guarantees are still lacking. Certifying ML models is challenging because of their complex decision-making processes. In this paper, we propose Fairify, an SMT-based approach to verify the individual fairness property of neural network (NN) models. Individual fairness ensures that any two similar individuals get similar treatment irrespective of their protected attributes, e.g., race, sex, or age. Verifying this property is hard because of the global checking and non-linear computation nodes in NNs. We propose a sound approach to make individual fairness verification tractable for developers. The key idea is that many neurons in an NN always remain inactive when a smaller part of the input domain is considered. Fairify therefore leverages whitebox access to the models in production and applies formal-analysis-based pruning. Our approach adopts input partitioning and then prunes the NN for each partition to provide a fairness certification or a counterexample. We leverage interval arithmetic and the activation heuristics of the neurons to perform the pruning as necessary. We evaluated Fairify on 25 real-world neural networks collected from four different sources, and demonstrated its effectiveness, scalability, and performance over baselines and closely related work. Fairify is also configurable based on the domain and size of the NN. Our novel formulation of the problem can answer targeted verification queries with relaxations and counterexamples, which have practical implications.
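The sketch below illustrates the interval-arithmetic idea behind the pruning (a generic reconstruction, not Fairify's implementation): propagating per-feature input bounds through one dense ReLU layer identifies neurons that are provably inactive on a given input partition and can therefore be removed before verification. The weights and bounds are random stand-ins.

```python
# A minimal sketch of interval-arithmetic pruning: propagate input
# bounds through one dense ReLU layer; any neuron whose upper
# pre-activation bound is <= 0 on this input partition is provably
# inactive and can be pruned before SMT verification. Weights and
# bounds are random stand-ins, not a Fairify model.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))   # 8 neurons, 4 input features
b = rng.normal(size=8)

# One input partition: per-feature interval bounds [lo, hi].
lo = np.array([0.0, 0.0, -1.0, -1.0])
hi = np.array([1.0, 1.0,  0.0,  1.0])

# Interval matrix-vector product: positive weights pick up the bound
# of matching sign, negative weights the opposite one.
W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
pre_lo = W_pos @ lo + W_neg @ hi + b
pre_hi = W_pos @ hi + W_neg @ lo + b

inactive = pre_hi <= 0          # ReLU output is identically 0: prune
always_on = pre_lo >= 0         # ReLU acts as identity: linear node
print("provably inactive neurons:", np.where(inactive)[0])
print("provably linear neurons:  ", np.where(always_on)[0])
```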

The tool developed as a result of this paper analyzes the ease and equity of access to major POI categories (e.g., vaccination centers, grocery stores, hospitals) using public transit in major U.S. cities. We built an interactive website that enables easy exploration of current access equity and supports scenario analysis by introducing or removing POIs. Accessibility indices were calculated using a 2-step floating catchment area (2SFCA) approach, and ML methods were utilized for exploratory statistical analysis of the results.
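For concreteness, here is a minimal sketch of the 2SFCA computation on toy data; the travel times, populations, and capacities are invented, whereas the actual tool works with transit travel-time data for real cities.

```python
# A minimal sketch of the 2-step floating catchment area (2SFCA)
# index. Step 1 gives each POI a supply-to-demand ratio over the
# population inside its catchment (a transit travel-time cutoff);
# step 2 sums those ratios over the POIs reachable from each zone.
# All numbers are made up.
import numpy as np

travel_min = np.array([          # zones x POIs, transit minutes
    [10, 25, 50],
    [30, 15, 45],
    [60, 20, 10],
])
population = np.array([5000.0, 8000.0, 3000.0])  # demand per zone
capacity = np.array([1.0, 1.0, 2.0])             # supply per POI
cutoff = 30                                       # catchment: 30 min

reachable = (travel_min <= cutoff).astype(float)  # zones x POIs

# Step 1: ratio R_j = capacity_j / population inside POI j's catchment.
demand = reachable.T @ population                 # per POI
R = capacity / demand

# Step 2: accessibility A_i = sum of R_j over POIs reachable from i.
A = reachable @ R
print("accessibility index per zone:", A)
```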

The study of biases, such as gender or racial biases, is an important topic in the social and behavioural sciences. However, the literature does not always clearly define the concept. Definitions of bias are often ambiguous or not provided at all. To study biases in a precise manner, it is important to have a well-defined concept of bias. We propose to define bias as a direct causal effect that is unjustified. We propose to define the closely related concept of disparity as a direct or indirect causal effect that includes a bias. Our proposed definitions can be used to study biases and disparities in a more rigorous and systematic way. We compare our definitions of bias and disparity with various criteria of fairness introduced in the artificial intelligence literature. We also illustrate our definitions in two case studies, focusing on gender bias in science and racial bias in police shootings. Our proposed definitions aim to contribute to a better appreciation of the causal intricacies of studies of biases and disparities. We hope that this will also promote an improved understanding of the policy implications of such studies.
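The toy linear structural causal model below illustrates the proposed distinction: the direct effect of a group attribute on an outcome (a candidate bias, if unjustified) versus the total effect, which also flows through mediators (a disparity, if it includes a bias). The variables and coefficients are invented; in the gender-bias case study one might read them as gender, department, and salary.

```python
# A toy linear structural causal model illustrating the proposed
# definitions: the *direct* effect of a group attribute on an outcome
# (a candidate bias, if unjustified) versus the *total* effect, which
# also flows through a mediator (a disparity, if it includes a bias).
# All coefficients are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)                       # e.g., gender
dept = 0.8 * group + rng.normal(size=n)             # mediator
salary = 2.0 * group + 1.5 * dept + rng.normal(size=n)

# Structural coefficients: direct effect 2.0; total effect
# 2.0 + 0.8 * 1.5 = 3.2. An unadjusted group comparison recovers
# the total effect (the disparity):
total = salary[group == 1].mean() - salary[group == 0].mean()

# Adjusting for the mediator recovers the direct effect (the bias
# candidate); here via least squares on [group, dept, intercept]:
X = np.column_stack([group, dept, np.ones(n)])
direct = np.linalg.lstsq(X, salary, rcond=None)[0][0]

print(f"total effect (disparity) ~ {total:.2f}")    # ~3.2
print(f"direct effect (bias)     ~ {direct:.2f}")   # ~2.0
```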

As the real-world impact of Artificial Intelligence (AI) systems has steadily grown, so too have these systems come under increasing scrutiny. In particular, the study of AI fairness has rapidly developed into a rich field of research with links to computer science, social science, law, and philosophy. Though many technical solutions for measuring and achieving AI fairness have been proposed, the underlying model of AI fairness has been widely criticized in recent years as misleading and unrealistic. In this paper, we survey these criticisms of AI fairness and identify key limitations that are inherent to the prototypical paradigm of AI fairness. By carefully outlining the extent to which technical solutions can realistically help in achieving AI fairness, we aim to provide readers with the background necessary to form a nuanced opinion on developments in the field of fair AI. This delineation also reveals research opportunities for non-AI solutions that operate alongside AI systems to support fair decision processes.

Collaborative research causes problems for research assessments because of the difficulty of fairly crediting its authors. Whilst splitting the rewards for an article amongst its authors has the greatest surface-level fairness, many important evaluations assign full credit to each author, irrespective of team size. The underlying rationales for this are labour reduction and the need to incentivise collaborative work, because it is necessary for solving many important societal problems. This article assesses whether full counting changes results compared to fractional counting in the case of the UK's Research Excellence Framework (REF) 2021. For this assessment, fractional counting reduces the number of journal articles to as little as 10% of the full counting value, depending on the Unit of Assessment (UoA). Despite this large difference, allocating an overall grade point average (GPA) based on full counting or fractional counting gives results with a median Pearson correlation within UoAs of 0.98. The largest changes are for Archaeology (r=0.84) and Physics (r=0.88). There is a weak tendency for higher-scoring institutions to lose from fractional counting, with the loss being statistically significant in 5 of the 34 UoAs. Thus, whilst the apparent over-weighting of contributions to collaboratively authored outputs does not seem too problematic from a fairness perspective overall, it may be worth examining in the few UoAs in which it makes the most difference.
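A minimal sketch of the two crediting schemes being compared, on invented data: full counting credits each institution with the whole article, while fractional counting credits 1/k of an article shared by k institutions; a GPA can then be computed under either weighting.

```python
# A minimal sketch of the two crediting schemes: full counting gives
# each credited institution weight 1 per article, fractional counting
# gives weight 1/k for an article shared by k institutions. The
# articles, institutions, and quality scores below are hypothetical.
from collections import defaultdict

# (quality score, institutions credited with the article)
articles = [
    (4, ["A"]),
    (3, ["A", "B"]),
    (4, ["A", "B", "C"]),
    (2, ["C"]),
]

def gpa(weighting):
    score = defaultdict(float)
    weight = defaultdict(float)
    for quality, insts in articles:
        w = weighting(len(insts))
        for inst in insts:
            score[inst] += w * quality
            weight[inst] += w
    return {inst: round(score[inst] / weight[inst], 2) for inst in score}

print("full counting:      ", gpa(lambda k: 1.0))
print("fractional counting:", gpa(lambda k: 1.0 / k))
# Both schemes average each article's quality score into the GPA; they
# differ only in how much a multi-institution article counts.
```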

Graph machine learning has been extensively studied in both academia and industry. However, as the literature on graph learning booms with a vast number of emerging methods and techniques, it becomes increasingly difficult to manually design the optimal machine learning algorithm for different graph-related tasks. To tackle this challenge, automated graph machine learning, which aims to discover the best hyper-parameter and neural architecture configuration for different graph tasks and data without manual design, is attracting increasing attention from the research community. In this paper, we extensively discuss automated graph machine learning approaches, covering hyper-parameter optimization (HPO) and neural architecture search (NAS) for graph machine learning. We briefly overview existing libraries designed for either graph machine learning or automated machine learning, and further introduce in depth AutoGL, our dedicated and the world's first open-source library for automated graph machine learning. Last but not least, we share our insights on future research directions for automated graph machine learning. This paper is the first systematic and comprehensive discussion of approaches, libraries, and directions for automated graph machine learning.
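As a minimal sketch of the HPO component (not AutoGL's API), the snippet below runs a random search over a hypothetical hyper-parameter space for a graph model; `train_and_evaluate` is a placeholder for an actual training-and-validation run.

```python
# A minimal random-search HPO sketch of the kind automated graph
# machine learning libraries wrap: sample hyper-parameter
# configurations, train a graph model with each, keep the best by
# validation score. `train_and_evaluate` is a hypothetical stand-in
# for training a GNN on a given dataset; it is not an AutoGL API.
import random

search_space = {
    "hidden_dim":    [16, 64, 256],
    "num_layers":    [2, 3, 4],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout":       [0.0, 0.3, 0.5],
}

def train_and_evaluate(config):  # hypothetical: returns val accuracy
    return random.random()       # placeholder for a real training run

def random_search(n_trials=20, seed=0):
    random.seed(seed)
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = {k: random.choice(v) for k, v in search_space.items()}
        score = train_and_evaluate(config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

config, score = random_search()
print(f"best config {config} with validation score {score:.3f}")
```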

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly large when the causal effects are correctly accounted for.
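The abstract does not spell out the proposed estimators, so as an illustrative stand-in the sketch below contrasts a naive difference-in-means estimator with a standard inverse-propensity-weighted (IPW) adjustment on synthetic, confounded lending-style data; it shows the kind of error reduction that accounting for confounding delivers, not the paper's method.

```python
# An illustrative stand-in (not the paper's estimator): on simulated
# data where a confounder drives both the credit decision and
# repayment, a naive difference in means is badly biased, while a
# standard inverse-propensity-weighted (IPW) estimator that adjusts
# for the confounder recovers the true effect. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
score = rng.normal(size=n)                    # confounder: creditworthiness
p_treat = 1 / (1 + np.exp(-1.5 * score))      # safer borrowers get credit
treated = rng.random(n) < p_treat
true_effect = 1.0
repay = true_effect * treated + 2.0 * score + rng.normal(size=n)

naive = repay[treated].mean() - repay[~treated].mean()

# IPW with the true propensity scores; in practice these would be
# estimated, e.g., by logistic regression on observed confounders.
w1, w0 = treated / p_treat, (~treated) / (1 - p_treat)
ipw = (w1 * repay).sum() / w1.sum() - (w0 * repay).sum() / w0.sum()

print(f"true effect {true_effect:.2f}, naive {naive:.2f}, IPW {ipw:.2f}")
# The naive estimate overstates the effect because credit goes to
# safer borrowers who repay more anyway; IPW removes that confounding.
```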
