In this work, we study the effects of feature-based explanations on distributive fairness of AI-assisted decisions, specifically focusing on the task of predicting occupations from short textual bios. We also investigate how any effects are mediated by humans' fairness perceptions and their reliance on AI recommendations. Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations. However, we see that such explanations do not enable humans to distinguish correct from incorrect AI recommendations. Instead, we show that they may affect reliance irrespective of the correctness of AI recommendations. Depending on which features an explanation highlights, this can foster or hinder distributive fairness: when explanations highlight features that are task-irrelevant and evidently associated with the sensitive attribute, this prompts overrides that counter AI recommendations that align with gender stereotypes. Meanwhile, if explanations appear task-relevant, this induces reliance behavior that reinforces stereotype-aligned errors. These results imply that feature-based explanations are not a reliable mechanism to improve distributive fairness.
In this work, we study the out-of-distribution (OOD) detection problem through the use of the feature space of a pre-trained deep classifier. We show that learning the density of in-distribution (ID) features with an energy-based model (EBM) leads to competitive detection results. However, we found that the non-mixing of MCMC sampling during the EBM's training undermines its detection performance. To overcome this, we propose an energy-based correction of a mixture of class-conditional Gaussian distributions. We obtain favorable results when compared to a strong baseline such as the KNN detector on the CIFAR-10/CIFAR-100 OOD detection benchmarks.
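As a rough illustration of the corrected-density idea, the sketch below scores features with a class-conditional Gaussian mixture plus a learned energy correction term. The fitting of the correction network and the exact way the two terms are combined are assumptions for illustration, not the paper's formulation.

```python
import math
import torch

def fit_class_gaussians(features, labels, num_classes):
    # Class means and a shared (tied) covariance estimated from ID features.
    means = torch.stack([features[labels == c].mean(0) for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.T @ centered / features.shape[0] + 1e-4 * torch.eye(features.shape[1])
    return means, torch.linalg.inv(cov), torch.logdet(cov)

def gmm_log_density(z, means, cov_inv, logdet):
    # Log-density of a uniform-weight mixture of class-conditional Gaussians at z.
    diffs = z.unsqueeze(1) - means.unsqueeze(0)                   # (N, C, D)
    maha = torch.einsum('ncd,de,nce->nc', diffs, cov_inv, diffs)  # Mahalanobis terms
    d = z.shape[1]
    logp_c = -0.5 * (maha + logdet + d * math.log(2 * math.pi))
    return torch.logsumexp(logp_c, dim=1) - math.log(means.shape[0])

def ood_score(z, means, cov_inv, logdet, correction_net):
    # Higher = more in-distribution: Gaussian-mixture log-density plus a learned
    # energy correction (illustrative combination of the two terms).
    return gmm_log_density(z, means, cov_inv, logdet) + correction_net(z).squeeze(-1)
```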
In this work, we consider the problem of localizing multiple signal sources based on time-difference of arrival (TDOA) measurements. In the blind setting, in which the source signals are not known, the localization task is challenging due to the data association problem. That is, it is not known which of the TDOA measurements correspond to the same source. Herein, we propose to perform joint localization and data association by means of an optimal transport formulation. The method operates by finding optimal groupings of TDOA measurements and associating these with candidate source locations. To allow for computationally feasible localization in three-dimensional space, an efficient set of candidate locations is constructed using a minimal multilateration solver based on minimal sets of receiver pairs. In numerical simulations, we demonstrate that the proposed method is robust both to measurement noise and TDOA detection errors. Furthermore, it is shown that the data association provided by the proposed method allows for statistically efficient estimates of the source locations.
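To make the optimal-transport formulation concrete, here is a minimal sketch that builds a residual-based cost between TDOA measurements and candidate source locations and computes a soft association with a basic Sinkhorn iteration. The cost definition, uniform marginals, assumed propagation speed, and hard argmax readout are simplifications, and the minimal multilateration solver that generates the candidates is not shown.

```python
import numpy as np

def predicted_tdoa(source, rx_i, rx_j, c=343.0):
    # TDOA at receiver pair (rx_i, rx_j) for a hypothetical source location;
    # c is the propagation speed (speed of sound assumed here).
    return (np.linalg.norm(source - rx_i) - np.linalg.norm(source - rx_j)) / c

def sinkhorn(cost, reg=0.05, n_iter=200):
    # Entropy-regularized optimal transport with uniform marginals.
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]        # transport plan ~ soft association

def associate(tdoa_measurements, rx_pairs, candidates):
    # cost[k, l]: squared residual of measurement k against candidate location l.
    cost = np.array([[(tau - predicted_tdoa(cand, ri, rj)) ** 2
                      for cand in candidates]
                     for tau, (ri, rj) in zip(tdoa_measurements, rx_pairs)])
    plan = sinkhorn(cost)
    return plan.argmax(axis=1)                # hard assignment measurement -> candidate
```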
In this study, we address the intricate challenge of multi-task dense prediction, encompassing tasks such as semantic segmentation, depth estimation, and surface normal estimation, particularly when dealing with partially annotated data (MTPSL). The complexity arises from the absence of complete task labels for each training image. Given the inter-related nature of these pixel-wise dense tasks, our focus is on mining and capturing cross-task relationships. Existing solutions typically rely on learning global image representations for global cross-task image matching, imposing constraints that, unfortunately, sacrifice the finer structures within the images. Attempting local matching as a remedy faces hurdles due to the lack of precise region supervision, making local alignment a challenging endeavor. The introduction of the Segment Anything Model (SAM) sheds light on addressing local alignment challenges by providing free and high-quality solutions for region detection. With SAM-detected regions in hand, the subsequent challenge lies in aligning the representations within these regions. Diverging from conventional methods that directly learn a monolithic image representation, our proposal involves modeling region-wise representations using Gaussian distributions. Aligning these distributions between corresponding regions from different tasks imparts higher flexibility and capacity to capture intra-region structures, accommodating a broader range of tasks. This innovative approach significantly enhances our ability to effectively capture cross-task relationships, resulting in improved overall performance in partially supervised multi-task dense prediction scenarios. Extensive experiments conducted on two widely used benchmarks underscore the superior effectiveness of our proposed method, showcasing state-of-the-art performance even when compared to fully supervised methods.
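The distribution-alignment step could look roughly like the following sketch: features inside each SAM-detected region are pooled into a diagonal Gaussian, and the Gaussians of two task branches are aligned with the closed-form 2-Wasserstein distance. The diagonal covariances and the choice of the Wasserstein metric are illustrative assumptions rather than the paper's exact objective.

```python
import torch

def region_gaussian(feature_map, mask):
    # feature_map: (C, H, W) task-specific features; mask: (H, W) boolean SAM region.
    feats = feature_map[:, mask]                      # (C, N_pixels)
    mu = feats.mean(dim=1)
    var = feats.var(dim=1, unbiased=False) + 1e-6     # diagonal covariance
    return mu, var

def w2_diag_gaussians(mu1, var1, mu2, var2):
    # Squared 2-Wasserstein distance between diagonal Gaussians (closed form).
    mean_term = ((mu1 - mu2) ** 2).sum()
    cov_term = ((var1.sqrt() - var2.sqrt()) ** 2).sum()
    return mean_term + cov_term

def cross_task_region_loss(feat_task_a, feat_task_b, masks):
    # Align region-wise distributions between two task-specific feature maps.
    loss = feat_task_a.new_zeros(())
    for m in masks:
        mu_a, var_a = region_gaussian(feat_task_a, m)
        mu_b, var_b = region_gaussian(feat_task_b, m)
        loss = loss + w2_diag_gaussians(mu_a, var_a, mu_b, var_b)
    return loss / max(len(masks), 1)
```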
This work focuses on dimension-reduction techniques for modelling conditional extreme values. Specifically, we investigate the idea that extreme values of a response variable can be explained by nonlinear functions derived from linear projections of an input random vector. In this context, the estimation of projection directions is examined, as approached by the Extreme Partial Least Squares (EPLS) method--an adaptation of the original Partial Least Squares (PLS) method tailored to the extreme-value framework. Further, a novel interpretation of EPLS directions as maximum likelihood estimators is introduced, utilizing the von Mises-Fisher distribution applied to hyperballs. The dimension reduction process is enhanced through the Bayesian paradigm, enabling the incorporation of prior information into the projection direction estimation. The maximum a posteriori estimator is derived in two specific cases, showing that it acts as a regularization or shrinkage of the EPLS estimator. We also establish its asymptotic behavior as the sample size approaches infinity. A simulation study is conducted to assess the practical utility of our proposed method, clearly demonstrating its effectiveness even with moderate sample sizes in high-dimensional settings. Furthermore, we provide an illustrative example of the method's applicability using French farm income data, highlighting its efficacy in real-world scenarios.
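As a schematic of the shrinkage interpretation, the sketch below combines the EPLS direction with a prior direction and renormalizes to the unit sphere. The actual MAP weights follow from the von Mises-Fisher posterior derived in the paper, so the simple linear combination shown here is only indicative.

```python
import numpy as np

def map_direction(epls_dir, prior_dir, tau):
    # Shrink the EPLS projection direction toward a prior direction and
    # renormalize. tau >= 0 controls the prior strength; tau = 0 recovers the
    # plain EPLS direction. Illustrative weighting only.
    w = np.asarray(epls_dir) + tau * np.asarray(prior_dir)
    return w / np.linalg.norm(w)
```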
In this work, we present several tools for efficient sequential hierarchical least-squares programming (S-HLSP) for lexicographical optimization tailored to robot control and planning. As its main step, S-HLSP relies on approximations of the original non-linear hierarchical least-squares programming (NL-HLSP) to a hierarchical least-squares programming (HLSP) by the hierarchical Newton's method or the hierarchical Gauss-Newton algorithm. We present a threshold adaptation strategy for appropriate switches between the two. This ensures optimality of infeasible constraints, promotes numerical stability when solving the HLSPs, and enhances optimality of lower priority levels by avoiding regularized local minima. We introduce the solver $\mathcal{N}$ADM$_2$, an alternating direction method of multipliers for HLSP based on nullspace projections of active constraints. The required nullspace basis of the active constraints is provided by a computationally efficient turnback algorithm for system dynamics discretized by the Euler method. It is based on an upper bound on the bandwidth of linearly independent column subsets within the linearized constraint matrices. Importantly, an expensive initial rank-revealing matrix factorization is unnecessary. We show how the high sparsity of the basis in the fully-actuated case can be preserved in the under-actuated case. $\mathcal{N}$ADM$_2$ consistently shows faster computation times than competing off-the-shelf solvers on NL-HLSPs composed of test functions and whole-body trajectory optimization for fully-actuated and under-actuated robotic systems. We demonstrate how the inherently lower accuracy solutions of the alternating direction method of multipliers can be used to warm-start the non-linear solver for efficient computation of high accuracy solutions to non-linear hierarchical least-squares programs.
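For intuition on the nullspace-projection structure that the solver exploits, the following equality-only sketch solves a stack of prioritized least-squares objectives by restricting each level to the nullspace of all higher-priority levels. The actual $\mathcal{N}$ADM$_2$ solver additionally handles inequalities, active-set logic, the ADMM splitting, and the sparse turnback nullspace basis, none of which appear here.

```python
import numpy as np

def solve_hlsp(levels, n):
    # levels: list of (A_l, b_l) objectives ||A_l x - b_l||, ordered from highest
    # to lowest priority. Each level is solved within the nullspace of all
    # higher-priority levels (equality-only, dense illustration).
    x = np.zeros(n)
    Z = np.eye(n)                                  # current nullspace basis
    for A, b in levels:
        if Z.shape[1] == 0:
            break                                  # no freedom left for lower levels
        AZ = A @ Z
        # Least-squares step within the remaining nullspace.
        z, *_ = np.linalg.lstsq(AZ, b - A @ x, rcond=None)
        x = x + Z @ z
        # Shrink the nullspace by the directions this level has fixed.
        _, s, Vt = np.linalg.svd(AZ)
        rank = int((s > 1e-10).sum())
        Z = Z @ Vt[rank:].T
    return x
```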
In this article, we introduce XPLORER, a passive deformable quadrotor optimized for performing contact-rich tasks by utilizing collision-induced deformation. We present a novel external force estimation technique, and advanced planning and control algorithms that exploit the compliant nature of XPLORER's chassis. These algorithms enable three distinct flight behaviors: static-wrench application, where XPLORER can exert desired forces and torque on surfaces for precise manipulation; disturbance rejection, wherein the quadrotor actively mitigates external forces and yaw disturbances to maintain its intended trajectory; and yielding to disturbance, enabling XPLORER to dynamically adapt its position and orientation to evade undesired forces, ensuring stable flight amidst unpredictable environmental factors. Leveraging these behaviors, we develop innovative mission strategies including tactile-traversal, tactile-turning, and collide-to-brake for contact-based exploration of unknown areas, contact-based mapping and swift navigation. Through experimental validation, we demonstrate the effectiveness of these strategies in enabling efficient exploration and rapid navigation in unknown environments, leveraging collisions as a means for feedback and control. This study contributes to the growing field of aerial robotics by showcasing the potential of passive deformable quadrotors for versatile and robust interaction tasks in real-world scenarios.
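As a point of reference for the force-estimation component, the sketch below implements a generic quasi-static external force estimate from the IMU specific force and the commanded thrust. This is a textbook residual, not the paper's novel estimator, and the frame and sign conventions are assumptions.

```python
import numpy as np

def estimate_external_force(R, accel_body, thrust, mass):
    # R: body-to-world rotation matrix; accel_body: IMU specific force in the
    # body frame (gravity already excluded by the accelerometer); thrust: total
    # commanded thrust along body z [N]; mass: vehicle mass [kg].
    # The residual between the measured inertial force and the commanded thrust
    # is attributed to external contact.
    f_thrust_body = np.array([0.0, 0.0, float(thrust)])
    return R @ (mass * np.asarray(accel_body) - f_thrust_body)
```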
To address the challenge of choice congestion in matching markets, in this work, we introduce a two-sided assortment optimization framework under general choice preferences. The goal in this problem is to maximize the expected number of matches by deciding which assortments are displayed to the agents and the order in which they are shown. In this context, we identify several classes of policies that platforms can use in their design. Our goals are: (1) to measure the value that one class of policies has over another one, and (2) to approximately solve the optimization problem itself for a given class. For (1), we define the adaptivity gap as the worst-case ratio between the optimal values of two different policy classes. First, we show that the gap between the class of policies that statically show assortments to one side first and the class of policies that adaptively show assortments to one side first is exactly $1-1/e$. Second, we show that the gap between the latter class of policies and the fully adaptive class of policies that show assortments to agents one by one is exactly $1/2$. We also note that the worst policies are those that simultaneously show assortments to all the agents; in fact, we show that their adaptivity gap, even with respect to one-sided static policies, can be arbitrarily small. For (2), we first show that there exists a polynomial time policy that achieves a $1/4$ approximation factor within the class of policies that adaptively show assortments to agents one by one. Finally, when agents' preferences are governed by multinomial-logit models, we show that a 0.066 approximation factor can be obtained within the class of policies that show assortments to all agents at once.
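For the multinomial-logit setting, the following sketch computes MNL choice probabilities for a displayed assortment and the expected number of new matches from showing it to a single agent, as a policy that approaches agents one by one would evaluate. It only illustrates the choice model underlying the guarantees, not the approximation algorithms themselves.

```python
import numpy as np

def mnl_choice_probs(utilities, assortment):
    # Multinomial-logit choice probabilities over a displayed assortment,
    # with an outside (no-choice) option of utility zero.
    w = np.exp(np.asarray(utilities, dtype=float)[list(assortment)])
    denom = 1.0 + w.sum()
    return {j: wj / denom for j, wj in zip(assortment, w)}, 1.0 / denom

def expected_new_matches(utilities, assortment, still_available):
    # Expected number of matches created by showing `assortment` to one agent,
    # counting only options whose partner on the other side is still available.
    probs, _ = mnl_choice_probs(utilities, assortment)
    return sum(p for j, p in probs.items() if j in still_available)
```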
In this work, we investigate the potential of agents based on large language models (LLMs) to automate data science tasks, with the goal of comprehending task requirements, then building and training the best-fit machine learning models. Despite their widespread success, existing LLM agents are hindered by their tendency to generate unreasonable experiment plans in this scenario. To this end, we present DS-Agent, a novel automatic framework that combines an LLM agent with case-based reasoning (CBR). In the development stage, DS-Agent follows the CBR framework to structure an automatic iteration pipeline, which can flexibly capitalize on the expert knowledge from Kaggle and facilitate consistent performance improvement through the feedback mechanism. Moreover, DS-Agent implements a low-resource deployment stage with a simplified CBR paradigm to adapt past successful solutions from the development stage for direct code generation, significantly reducing the demands on the foundational capabilities of LLMs. Empirically, DS-Agent with GPT-4 achieves an unprecedented 100% success rate in the development stage, while attaining a 36% improvement in the average one-pass rate across alternative LLMs in the deployment stage. In both stages, DS-Agent achieves the best rank in performance, costing \$1.60 and \$0.13 per run with GPT-4, respectively. Our code is open-sourced at //github.com/guosyjlu/DS-Agent.
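The retrieve-and-reuse loop at the heart of the CBR framing might be sketched as follows. The similarity function, prompt construction, and case schema are placeholders for illustration, not DS-Agent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Case:
    task_description: str
    solution_plan: str          # e.g., an outline of an expert Kaggle solution

def similarity(a: str, b: str) -> float:
    # Placeholder lexical (Jaccard) similarity; the real retriever may differ.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def retrieve(case_bank, task, top_k=3):
    # Rank stored cases by similarity to the new task description.
    ranked = sorted(case_bank, key=lambda c: similarity(task, c.task_description),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(task, cases, feedback):
    retrieved = "\n\n".join(c.solution_plan for c in cases)
    fb = f"\n\nFeedback from the last run:\n{feedback}" if feedback else ""
    return (f"Task:\n{task}\n\nRelevant past solutions:\n{retrieved}{fb}\n\n"
            "Propose a revised experiment plan.")

def cbr_step(llm, case_bank, task, feedback=None):
    # One retrieve-reuse-revise iteration: retrieved cases ground the plan,
    # and execution feedback drives the next revision.
    return llm(build_prompt(task, retrieve(case_bank, task), feedback))
```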
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers, while disregarding the explanations. We argue that the explanation for an answer is as important as, or even more important than, the answer itself, since it makes the question answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the computational models are required to generate an explanation along with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of explanations synthesized by our method. We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers, but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.
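A schematic of the multi-task formulation is sketched below: a shared fused question-image representation feeds an answer classifier and an explanation decoder, trained with a weighted sum of the two losses. The architecture and loss weighting are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class VQAEHead(nn.Module):
    # Joint answer-classification and explanation-generation heads on top of a
    # fused question-image representation (schematic only).
    def __init__(self, fused_dim, num_answers, vocab_size, hidden=512):
        super().__init__()
        self.answer_head = nn.Linear(fused_dim, num_answers)
        self.explainer = nn.GRU(fused_dim, hidden, batch_first=True)
        self.word_proj = nn.Linear(hidden, vocab_size)

    def forward(self, fused, expl_len):
        answer_logits = self.answer_head(fused)
        # Feed the fused vector at every step as a simple conditioning scheme.
        steps = fused.unsqueeze(1).repeat(1, expl_len, 1)
        out, _ = self.explainer(steps)
        expl_logits = self.word_proj(out)
        return answer_logits, expl_logits

def vqa_e_loss(answer_logits, answer_targets, expl_logits, expl_targets, alpha=1.0):
    # Multi-task objective: answer classification + explanation generation.
    ce = nn.functional.cross_entropy
    l_ans = ce(answer_logits, answer_targets)
    l_expl = ce(expl_logits.reshape(-1, expl_logits.size(-1)), expl_targets.reshape(-1))
    return l_ans + alpha * l_expl
```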