
Recent conversations in the algorithmic fairness literature have raised several concerns with standard conceptions of fairness. First, constraining predictive algorithms to satisfy fairness benchmarks may lead to non-optimal outcomes for disadvantaged groups. Second, technical interventions are often ineffective by themselves, especially when divorced from an understanding of the structural processes that generate social inequality. Inspired by both critiques, we construct a common decision-making model, using mortgage loans as a running example. We show that under some conditions, any choice of decision threshold will inevitably perpetuate existing disparities in financial stability unless one deviates from the Pareto optimal policy. Then, we model the effects of three different types of interventions. We show how the recommended intervention depends both on the difficulty of enacting structural change on external parameters and on the policymaker's preference for equity or efficiency. Counterintuitively, we demonstrate that a preference for efficiency over equity may lead to recommendations for interventions that target the under-resourced group. Finally, we simulate the effects of the interventions on a dataset that combines HMDA and Fannie Mae loan data. This research highlights the ways that structural inequality can be perpetuated by seemingly unbiased decision mechanisms, and it shows that in many situations, technical solutions must be paired with external, context-aware interventions to enact social change.
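As a concrete illustration of the threshold mechanism described above, the following sketch (not the paper's actual model; the score distributions, repayment link, costs, and the 620 cutoff are all assumptions for illustration) applies one group-blind approval threshold to two groups with different credit-score distributions and reports the resulting approval rates:

import numpy as np

rng = np.random.default_rng(0)

def simulate_group(mean_score, n=10_000, threshold=620, gain=1.0, loss=4.0):
    # Assumed score distribution and repayment link; all numbers are illustrative.
    scores = rng.normal(mean_score, 50, size=n)
    repay_prob = 1.0 / (1.0 + np.exp(-(scores - 600.0) / 40.0))
    approved = scores >= threshold
    repaid = rng.random(n) < repay_prob
    # Lender profit: +gain on repaid loans, -loss on defaults, 0 on denials.
    profit = np.where(approved, np.where(repaid, gain, -loss), 0.0).sum()
    return approved.mean(), profit

# The same "group-blind" threshold yields very different approval rates
# when the groups start from different score distributions.
for name, mean in [("advantaged", 680), ("disadvantaged", 620)]:
    rate, profit = simulate_group(mean)
    print(f"{name:>13}: approval rate {rate:.2f}, lender profit {profit:,.0f}")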

Related content

The moral value of liberty is a central concept in our inference system when it comes to taking a stance towards controversial social issues such as vaccine hesitancy, climate change, or the right to abortion. Here, we propose a novel Liberty lexicon evaluated on more than 3,000 manually annotated instances in both in-domain and out-of-domain scenarios. As a result of this evaluation, we produce a combined lexicon that constitutes the main outcome of this work. This final lexicon incorporates information from an ensemble of lexicons generated using word embedding similarity (WE) and compositional semantics (CS). Our key contributions include enriching the liberty annotations, developing a robust liberty lexicon for broader application, and revealing the complexity of expressions related to liberty across different platforms. Through the evaluation, we show that the difficulty of the task calls for approaches that combine knowledge sources in an effort to improve the representations of learning systems.
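To make the word-embedding-similarity (WE) step concrete, here is a minimal sketch of seed-lexicon expansion by cosine similarity; the toy vectors, vocabulary, and 0.85 threshold are assumptions, not the paper's actual resources:

import numpy as np

# Stand-in for pretrained word vectors; a real run would load, e.g., GloVe.
embeddings = {
    "liberty":  np.array([0.80, 0.20, 0.10]),
    "freedom":  np.array([0.90, 0.10, 0.00]),
    "autonomy": np.array([0.70, 0.30, 0.20]),
    "mandate":  np.array([-0.70, 0.50, 0.10]),
}
seed_lexicon = {"liberty", "freedom"}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand(seed, vectors, threshold=0.85):
    # Add any vocabulary word whose vector is close enough to a seed word.
    expanded = set(seed)
    for word, vec in vectors.items():
        if any(cosine(vec, vectors[s]) >= threshold for s in seed if s in vectors):
            expanded.add(word)
    return expanded

print(expand(seed_lexicon, embeddings))  # e.g., {'liberty', 'freedom', 'autonomy'}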

Low-discrepancy point sets have been widely used as a tool to approximate continuous objects by discrete ones in numerical processes, for example in numerical integration. Following a century of research on the topic, it is still unclear how low the discrepancy of point sets can go; in other words, how regularly distributed points can be in a given space. Recent insights using optimization and machine learning techniques have led to substantial improvements in the construction of low-discrepancy point sets, resulting in configurations of much lower discrepancy values than previously known. Building on these optimal constructions, we present a simple way to obtain an $L_{\infty}$-optimized placement of points that follows the same relative order as an (arbitrary) input set. Applying this approach to point sets in dimensions 2 and 3 with up to 400 and 50 points, respectively, we obtain point sets whose $L_{\infty}$ star discrepancies are up to 25% smaller than those of the current-best sets, and around 50% better than those of classical constructions such as the Fibonacci set.
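For readers unfamiliar with the quantity being optimized, the following sketch brute-forces an estimate of the $L_{\infty}$ star discrepancy of a small 2D point set by checking origin-anchored boxes whose corners come from the point coordinates; it is an $O(n^3)$ illustration of the definition, not the optimization procedure used in the paper:

def star_discrepancy_2d(points):
    # Local discrepancy |A(box)/n - vol(box)| over boxes [0, x) x [0, y);
    # candidate corners are the point coordinates plus 1.0. Open/closed counts
    # capture the two directions in which the supremum can be attained.
    n = len(points)
    xs = sorted({p[0] for p in points} | {1.0})
    ys = sorted({p[1] for p in points} | {1.0})
    worst = 0.0
    for x in xs:
        for y in ys:
            open_count = sum(px < x and py < y for px, py in points)
            closed_count = sum(px <= x and py <= y for px, py in points)
            vol = x * y
            worst = max(worst, vol - open_count / n, closed_count / n - vol)
    return worst

# Example: 8 points of a simple golden-ratio lattice in [0, 1)^2.
phi = (5 ** 0.5 - 1) / 2
pts = [((i + 0.5) / 8, (i * phi) % 1) for i in range(8)]
print(f"estimated L_inf star discrepancy: {star_discrepancy_2d(pts):.4f}")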

The risks AI presents to society are broadly understood to be manageable through general calculus, i.e., general frameworks designed to enable those involved in the development of AI to apprehend and manage risk, such as AI impact assessments, ethical frameworks, emerging international standards, and regulations. This paper elaborates how risk is apprehended and managed by a regulator, a developer, and a cyber-security expert. It reveals that risk and risk management are dependent on mundane situated practices not encapsulated in general calculus. Situated practice surfaces iterable epistopics, revealing how those involved in the development of AI know and subsequently respond to risk and uncover major challenges in their work. The ongoing discovery and elaboration of epistopics of risk in AI (a) furnishes a potential program of interdisciplinary inquiry, (b) provides AI developers with a means of apprehending risk, and (c) informs the ongoing evolution of general calculus.

Game semantics is a denotational semantics presenting compositionally the computational behaviour of various kinds of effectful programs. One of its celebrated achievements is to have obtained full abstraction results for programming languages with a variety of computational effects, within a single framework. This is known as the semantic cube, or Abramsky's cube, which for sequential deterministic programs establishes a correspondence between certain conditions on strategies ("innocence", "well-bracketing", "visibility") and the absence of matching computational effects. Outside of the sequential deterministic realm, there is still a wealth of game-semantics-based full abstraction results; but they no longer fit in a unified canvas. In particular, Ghica and Murawski's fully abstract model for shared-state concurrency (IA) does not have a matching notion of pure parallel program; we say that parallelism and interference (i.e. state plus semaphores) are entangled. In this paper we construct a causal version of Ghica and Murawski's model, also fully abstract for IA. We provide compositional conditions, parallel innocence and sequentiality, respectively banning interference and parallelism, and leading to four full abstraction results. To our knowledge, this is the first extension of Abramsky's semantic cube programme beyond the sequential deterministic world.

The Strong Exponential Time Hypothesis (SETH) is a hypothesis of fundamental importance to (fine-grained) parameterized complexity theory, and many important tight lower bounds are based on it. This situation is somewhat problematic, because the validity of the SETH is not universally believed and because in some senses the SETH seems to be "too strong" a hypothesis for the considered lower bounds. Motivated by this, we consider a number of reasonable weakenings of the SETH that render it more plausible, with sources ranging from circuit complexity, to backdoors for SAT-solving, to graph width parameters, to weighted satisfiability problems. Despite the diversity of the different formulations, we are able to uncover several non-obvious connections using tools from classical complexity theory. This leads us to a hierarchy of five main equivalence classes of hypotheses, with some of the highlights being the following: We show that beating brute force search for SAT parameterized by a modulator to a graph of bounded pathwidth, or bounded treewidth, or logarithmic tree-depth, is actually the same question, and is in fact equivalent to beating brute force for circuits of depth $\epsilon n$; we show that beating brute force search for a strong 2-SAT backdoor is equivalent to beating brute force search for a modulator to logarithmic pathwidth; we show that beating brute force search for a strong Horn backdoor is equivalent to beating brute force search for arbitrary circuit SAT.
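For reference, a standard formulation of the hypothesis that the weakenings above are measured against is:

$$\text{SETH:}\quad \forall\, \varepsilon > 0\ \ \exists\, k \ge 3 \ \text{ such that } k\text{-SAT on } n \text{ variables cannot be solved in time } O\bigl(2^{(1-\varepsilon)n}\bigr).$$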

We consider the challenging problem of estimating causal effects from purely observational data in bi-directional Mendelian randomization (MR), where some invalid instruments, as well as unmeasured confounding, usually exist. To address this problem, most existing methods attempt to find proper valid instrumental variables (IVs) for the target causal effect by relying on expert knowledge or by assuming that the causal model is a one-directional MR model. Motivated by these limitations, in this paper we first theoretically investigate the identification of the bi-directional MR model from observational data. In particular, we provide necessary and sufficient conditions under which valid IV sets are correctly identified such that the bi-directional MR model is identifiable, including the causal directions of a pair of phenotypes (i.e., the treatment and the outcome). Moreover, based on this identification theory, we develop a cluster fusion-like method to discover valid IV sets and estimate the causal effects of interest. We theoretically demonstrate the correctness of the proposed algorithm. Experimental results show the effectiveness of our method for estimating causal effects in bi-directional MR.
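As background for the estimation problem, here is a minimal sketch of why a valid IV recovers the causal effect under unmeasured confounding. This is the textbook single-instrument Wald ratio, not the cluster fusion-like method proposed in the paper, and the simulated effect sizes are assumptions:

import numpy as np

rng = np.random.default_rng(1)
n = 50_000
G = rng.binomial(2, 0.3, size=n)        # genetic instrument (allele count)
U = rng.normal(size=n)                  # unmeasured confounder
X = 0.8 * G + U + rng.normal(size=n)    # exposure (treatment phenotype)
Y = 0.5 * X + U + rng.normal(size=n)    # outcome phenotype; true effect = 0.5

beta_iv = np.cov(G, Y)[0, 1] / np.cov(G, X)[0, 1]     # Wald ratio estimate
beta_ols = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)     # naive regression slope
print(f"IV estimate:  {beta_iv:.3f}  (close to 0.5 despite confounding)")
print(f"OLS estimate: {beta_ols:.3f}  (biased upward by the confounder U)")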

In many situations, the measurements of a studied phenomenon are provided sequentially, and the prediction of its class needs to be made as early as possible so as not to incur too high a time penalty, but not too early, at the risk of paying the cost of misclassification. This problem has been particularly studied in the case of time series and is known as Early Classification of Time Series (ECTS). Although it has been the subject of a growing body of literature, there is still a lack of a systematic, shared evaluation protocol to compare the relative merits of the various existing methods. This document begins by situating these methods within a principle-based taxonomy. It defines dimensions for organizing their evaluation and then reports the results of a very extensive set of experiments along these dimensions involving nine state-of-the-art ECTS algorithms. In addition, these and other experiments can be carried out using an open-source library in which most of the existing ECTS algorithms have been implemented (see //github.com/ML-EDM/ml_edm).
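For intuition about the accuracy-versus-earliness trade-off that ECTS methods manage, here is a generic trigger-rule sketch; it is not the API of the ml_edm library nor any of the nine benchmarked algorithms, and the confidence trajectory and cost values are assumptions:

def should_trigger(confidence, t, horizon, delay_cost=0.01, base_threshold=0.95):
    # Accept a lower confidence as time, and hence delay cost, accumulates;
    # at the end of the series we must classify regardless.
    return confidence >= base_threshold - delay_cost * t or t == horizon - 1

# Toy confidence trajectory from a probabilistic classifier on growing prefixes.
confidences = [0.55, 0.62, 0.71, 0.78, 0.86, 0.91, 0.94, 0.97]
for t, c in enumerate(confidences):
    if should_trigger(c, t, horizon=len(confidences)):
        print(f"classify at t={t} with confidence {c:.2f}")
        break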

In the realm of self-supervised learning (SSL), masked image modeling (MIM) has gained popularity alongside contrastive learning methods. MIM involves reconstructing masked regions of input images using their unmasked portions. A notable subset of MIM methodologies employs discrete tokens as the reconstruction target, but the theoretical underpinnings of this choice remain underexplored. In this paper, we explore the role of these discrete tokens, aiming to unravel their benefits and limitations. Building upon the connection between MIM and contrastive learning, we provide a comprehensive theoretical understanding of how discrete tokenization affects the model's generalization capabilities. Furthermore, we propose a novel metric named TCAS, specifically designed to assess the effectiveness of discrete tokens within the MIM framework. Inspired by this metric, we contribute an innovative tokenizer design and propose a corresponding MIM method named ClusterMIM, which demonstrates superior performance on a variety of benchmark datasets and ViT backbones. Code is available at //github.com/PKU-ML/ClusterMIM.
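The following conceptual sketch shows what "discrete tokens as the reconstruction target" means in practice: patches are assigned to their nearest codebook entry, and the model must predict the token ids of the masked patches. It is not the ClusterMIM implementation; the patch and codebook dimensions are assumptions:

import numpy as np

rng = np.random.default_rng(0)
num_patches, patch_dim, codebook_size = 196, 64, 128   # assumed sizes

patches = rng.normal(size=(num_patches, patch_dim))    # patch embeddings
codebook = rng.normal(size=(codebook_size, patch_dim)) # e.g., learned by k-means

# Discrete tokenization: each patch is replaced by the id of its nearest
# codebook entry; these ids are the reconstruction targets.
dists = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
token_ids = dists.argmin(axis=1)

# Random masking: the encoder sees only the visible patches, and the model is
# trained with cross-entropy to predict the token ids of the masked patches.
mask = rng.random(num_patches) < 0.6
visible_patches, target_ids = patches[~mask], token_ids[mask]
print(f"masked {int(mask.sum())} of {num_patches} patches; "
      f"{len(np.unique(target_ids))} distinct target token ids")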

Understanding causality helps to structure interventions to achieve specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery has transitioned from traditional methods that infer potential causal structures from observational data to deep-learning-based approaches rooted in pattern recognition. The rapid accumulation of massive data has promoted the emergence of causal discovery methods with excellent scalability. Existing surveys of causal discovery mainly focus on traditional methods based on constraints, scores, and functional causal models (FCMs); deep-learning-based methods have not yet been systematically organized and elaborated, and causal discovery has rarely been examined from the perspective of variable paradigms. Therefore, we divide causal discovery tasks into three types according to the variable paradigm and give a definition of each task, define and instantiate the relevant datasets and the final causal model constructed for each task, and then review the main existing causal discovery methods for the different tasks. Finally, we propose roadmaps from different perspectives for the current research gaps in the field of causal discovery and point out future research directions.

Over the last several years, the field of natural language processing has been propelled forward by an explosion in the use of deep learning models. This survey provides a brief introduction to the field and a quick overview of deep learning architectures and methods. It then sifts through the plethora of recent studies and summarizes a large assortment of relevant contributions. Analyzed research areas include several core linguistic processing issues in addition to a number of applications of computational linguistics. A discussion of the current state of the art is then provided along with recommendations for future research in the field.
