Even though research has repeatedly shown that non-cash incentives can be effective, cash incentives are the de facto standard in crowdsourcing contests. In this multi-study research, we quantify ideators' preferences for non-cash incentives and investigate how allowing ideators to self-select their preferred incentive -- offering ideators a choice between cash and non-cash incentives -- affects their creative performance. We further explore whether the market context of the organization hosting the contest -- social (non-profit) or monetary (for-profit) -- moderates incentive preferences and their effectiveness. We find that individuals exhibit heterogeneous incentive preferences and often prefer non-cash incentives, even in for-profit contexts. Offering ideators a choice of incentives can enhance creative performance. Market context moderates the effect of incentives, such that ideators who receive non-cash incentives in for-profit contexts tend to exert less effort. We show that heterogeneity of ideators' preferences (and the ability to satisfy diverse preferences with suitably diverse incentive options) is a critical boundary condition to realizing benefits from offering ideators a choice of incentives. We provide managers with guidance to design effective incentives by improving incentive-preference fit for ideators.
Researchers have focused on understanding how individuals' behavior is influenced by the behavior of their peers in observational studies of social networks. Identifying and estimating causal peer influence, however, is challenging due to confounding by homophily: people tend to connect with those who share similar characteristics. Moreover, since the attributes driving homophily are generally not all observed and thus act as unobserved confounders, identifying and estimating causal peer influence becomes infeasible under standard causal identification assumptions. In this paper, we address this challenge by leveraging latent locations inferred from the network itself to disentangle homophily from causal peer influence, and we extend this approach to multiple networks through a Bayesian hierarchical modeling framework. To accommodate the nonlinear dependence of peer influence on individual behavior, we employ a Bayesian nonparametric method, specifically Bayesian Additive Regression Trees (BART), and we propose a Bayesian framework that accounts for the uncertainty in inferring latent locations. We assess the operating characteristics of the estimator via an extensive simulation study. Finally, we apply our method to estimate causal peer influence in advice-seeking networks of teachers in secondary schools, in order to assess whether teachers' beliefs about mathematics education are influenced by the beliefs of the peers from whom they receive advice. Our results suggest that overlooking latent homophily can lead to either underestimation or overestimation of causal peer influence, accompanied by considerable estimation uncertainty.
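To make the adjustment concrete, here is a minimal Python sketch of the idea on a single simulated network: latent locations are inferred from the network itself (a spectral embedding stands in for the latent-space model) and included as covariates in a nonlinear outcome model (scikit-learn gradient boosting stands in for BART, which is not assumed available). All data and settings are illustrative.

```python
# Minimal sketch of adjusting for latent homophily on one toy network.
# Spectral embedding ~ latent locations; gradient boosting ~ BART.
import numpy as np
from sklearn.manifold import spectral_embedding
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200
A = rng.binomial(1, 0.05, size=(n, n))      # toy adjacency matrix
A = np.triu(A, 1); A = A + A.T              # symmetric, zero diagonal

y_prev = rng.normal(size=n)                 # prior-wave behavior
deg = A.sum(axis=1).clip(min=1)
peer_exposure = A @ y_prev / deg            # average peer behavior

# Latent locations inferred from the network itself (spectral proxy).
Z = spectral_embedding(A.astype(float), n_components=2, random_state=0)

# Simulated outcome: true peer effect 0.5 plus a homophily confounder.
y = 0.5 * peer_exposure + Z[:, 0] + rng.normal(scale=0.5, size=n)

# Nonlinear outcome model with Z as confounder proxies; dropping Z
# from X would confound the estimated peer effect.
X = np.column_stack([peer_exposure, y_prev, Z])
model = GradientBoostingRegressor(random_state=0).fit(X, y)
```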
In a context of continuous digitalisation of processes, organisations must deal with the challenge of detecting anomalies that can reveal suspicious activity in an increasing volume of data. To pursue this goal, audit engagements are carried out regularly, and internal auditors and purchase specialists are constantly looking for new methods to automate these processes. This work proposes a methodology to prioritise the investigation of cases detected in two large real-world purchase datasets. The goal is to contribute to the effectiveness of companies' control efforts and to increase the efficiency of carrying out such tasks. A comprehensive Exploratory Data Analysis is carried out before applying unsupervised Machine Learning techniques aimed at detecting anomalies. A univariate approach is applied through the z-Score index and the DBSCAN algorithm, while a multivariate analysis is implemented with the k-Means and Isolation Forest algorithms and the Silhouette index, with each method producing its own set of candidate transactions for review. An ensemble prioritisation of the candidates is provided, together with a proposal of explainability methods (LIME, Shapley values, SHAP) to help the company specialists in their understanding.
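As a rough illustration of the pipeline, the Python sketch below flags candidates with each of the four detectors on synthetic data and orders them by an ensemble vote; all thresholds, parameters, and features are illustrative rather than the paper's.

```python
# Minimal sketch of the ensemble prioritisation on a toy feature matrix.
import numpy as np
from scipy import stats
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN, KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.lognormal(size=(1000, 4))           # toy transaction features
Xs = StandardScaler().fit_transform(X)

# Univariate view: extreme |z-score| on any feature.
z_flag = (np.abs(stats.zscore(X, axis=0)).max(axis=1) > 3).astype(int)

# Density view: DBSCAN noise points (label -1).
db_flag = (DBSCAN(eps=0.8, min_samples=10).fit_predict(Xs) == -1).astype(int)

# Distance view: far from the nearest k-Means centroid.
# (The paper's Silhouette-based choice of k is omitted; k=5 is arbitrary.)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(Xs)
d = km.transform(Xs).min(axis=1)
km_flag = (d > np.quantile(d, 0.99)).astype(int)

# Isolation Forest view.
if_flag = (IsolationForest(random_state=0).fit_predict(Xs) == -1).astype(int)

# Ensemble vote: transactions flagged by more methods are reviewed first.
votes = z_flag + db_flag + km_flag + if_flag
review_order = np.argsort(-votes)
```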
The rapid development of collaborative robotics has opened a new possibility of helping elderly people who have difficulties in daily life, allowing robots to operate according to specific intentions. However, efficient human-robot cooperation requires natural, accurate, and reliable intention recognition in shared environments. The paramount challenge here is to reduce the uncertainty of the fused multimodal intention to be recognized and to adaptively infer a more reliable result under the current interaction conditions. In this work we propose a novel learning-based multimodal fusion framework, Batch Multimodal Confidence Learning for Opinion Pool (BMCLOP). Our approach combines a Bayesian multimodal fusion method with a batch confidence learning algorithm to improve accuracy, uncertainty reduction, and success rate under varying interaction conditions. Moreover, this generic and practical multimodal intention recognition framework can easily be extended further. Our target assistive scenarios consider three modalities: gestures, speech, and gaze, all of which produce categorical distributions over the finite set of intentions. The proposed method is validated with a six-DoF robot through extensive experiments and exhibits high performance compared to baselines.
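The fusion step can be illustrated with a small sketch: each modality emits a categorical distribution over the intentions, and a confidence-weighted log-linear opinion pool combines them. The fixed weights and pooling rule below are illustrative stand-ins for BMCLOP's learned batch confidences.

```python
# Minimal sketch of confidence-weighted fusion of categorical
# intention distributions via a log-linear opinion pool.
import numpy as np

def fuse(opinions, weights):
    """Log-linear opinion pool over categorical distributions.

    opinions: (n_modalities, n_intentions), each row sums to 1.
    weights:  per-modality confidences, >= 0.
    """
    log_pool = weights @ np.log(np.clip(opinions, 1e-12, None))
    p = np.exp(log_pool - log_pool.max())   # stable renormalisation
    return p / p.sum()

gesture = np.array([0.70, 0.20, 0.10])
speech  = np.array([0.40, 0.50, 0.10])
gaze    = np.array([0.34, 0.33, 0.33])      # nearly uninformative

p = fuse(np.stack([gesture, speech, gaze]),
         weights=np.array([1.0, 0.8, 0.2])) # stand-in confidences
print(p, p.argmax())                        # fused intention estimate
```

Down-weighting the near-uniform gaze opinion keeps an unreliable modality from diluting the fused distribution, which is the role the learned confidences play.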
Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering, especially in scenarios featuring complex domains or the incorporation of empirical data. One advantage of the neural network method for PDEs lies in its use of automatic differentiation (AD), which requires only the sample points themselves, unlike traditional finite difference (FD) approximations, which require nearby local points to compute derivatives. In this paper, we quantitatively demonstrate the advantage of AD in training neural networks. The concept of truncated entropy is introduced to characterize the training property. Specifically, through comprehensive experimental and theoretical analyses of random feature models and two-layer neural networks, we find that the defined truncated entropy serves as a reliable metric for quantifying the residual loss of random feature models and the training speed of neural networks for both the AD and FD methods. Our experimental and theoretical analyses demonstrate that, from a training perspective, AD outperforms FD in solving PDEs.
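The AD/FD contrast can be made concrete in a few lines of PyTorch: AD differentiates the network exactly at the sample points, while FD requires auxiliary nearby points x ± h. The 1-D Poisson-type residual, network, and step size below are illustrative choices, not the paper's setup.

```python
# Minimal sketch: AD vs. FD residual losses for u''(x) - f(x) = 0.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
x = torch.rand(64, 1, requires_grad=True)

# AD: exact second derivative at the sample points themselves.
u = net(x)
du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
d2u_ad = torch.autograd.grad(du.sum(), x, create_graph=True)[0]

# FD: central difference needs the nearby local points x +/- h.
h = 1e-3
d2u_fd = (net(x + h) - 2 * net(x) + net(x - h)) / h**2

f = torch.ones_like(u)                      # toy right-hand side
loss_ad = ((d2u_ad - f) ** 2).mean()        # AD residual loss
loss_fd = ((d2u_fd - f) ** 2).mean()        # FD residual loss
```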
Data uncertainties, such as sensor noise, occlusions, or limitations in the acquisition method, can introduce irreducible ambiguities in images, which result in varying, yet plausible, semantic hypotheses. In Machine Learning, this ambiguity is commonly referred to as aleatoric uncertainty. In image segmentation, latent density models can be utilized to address this problem. The most popular approach is the Probabilistic U-Net (PU-Net), which uses latent Normal densities to optimize the conditional data log-likelihood Evidence Lower Bound. In this work, we demonstrate that the PU-Net latent space is severely sparse and heavily under-utilized. To address this, we introduce mutual information maximization and an entropy-regularized Sinkhorn Divergence in the latent space to promote homogeneity across all latent dimensions, effectively improving gradient-descent updates and latent space informativeness. Our results show that, when applied to public datasets of various clinical segmentation problems, the proposed methodology achieves up to 11% performance gains over preceding latent variable models for probabilistic segmentation on the Hungarian-Matched Intersection over Union. These results indicate that encouraging a homogeneous latent space significantly improves latent density modeling for medical image segmentation.
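For readers unfamiliar with the regularizer, here is a minimal sketch of an entropy-regularized Sinkhorn penalty between two sets of latent samples; how it is attached to the PU-Net ELBO and the value of epsilon are simplified here, and the full Sinkhorn divergence would additionally subtract the two self-transport terms.

```python
# Minimal sketch of an entropy-regularised Sinkhorn penalty between
# latent samples; log-domain implementations are preferred in practice.
import torch

def sinkhorn(x, y, eps=1.0, iters=100):
    C = torch.cdist(x, y) ** 2              # pairwise squared costs
    K = torch.exp(-C / eps)                 # Gibbs kernel
    a = torch.full((x.shape[0],), 1.0 / x.shape[0])
    b = torch.full((y.shape[0],), 1.0 / y.shape[0])
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(iters):                  # Sinkhorn fixed-point steps
        u = a / (K @ v + 1e-30)
        v = b / (K.T @ u + 1e-30)
    P = u[:, None] * K * v[None, :]         # entropic transport plan
    return (P * C).sum()                    # transport cost

z_post = torch.randn(128, 6)                # posterior latent samples
z_prior = torch.randn(128, 6)               # prior samples
penalty = sinkhorn(z_post, z_prior)         # added to the training loss
```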
Leveraging complementary relationships across modalities has recently drawn a lot of attention in multimodal emotion recognition. Most existing approaches explore cross-attention to capture the complementary relationships across modalities. However, the modalities may also exhibit weak complementary relationships, which can deteriorate the cross-attended features and result in poor multimodal feature representations. To address this problem, we propose Inconsistency-Aware Cross-Attention (IACA), which can adaptively select the most relevant features on-the-fly based on the strong or weak complementary relationships across the audio and visual modalities. Specifically, we design a two-stage gating mechanism that adaptively selects the appropriate relevant features to deal with weak complementary relationships. Extensive experiments conducted on the challenging Aff-Wild2 dataset demonstrate the robustness of the proposed model.
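A single-stage reduction of the gating idea can be sketched as follows: a learned sigmoid gate decides, per feature, whether to trust the cross-attended feature or fall back to the original one when complementarity is weak. The architecture below is illustrative and not the exact two-stage IACA design.

```python
# Minimal sketch of gating cross-attended features against the
# original unattended ones (single-stage simplification of IACA).
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, audio, visual):
        # Audio queries attend to visual keys/values.
        attended, _ = self.attn(audio, visual, visual)
        # The gate weighs cross-attended vs. original features, so weakly
        # complementary visual cues do not corrupt the audio stream.
        g = self.gate(torch.cat([audio, attended], dim=-1))
        return g * attended + (1 - g) * audio

a = torch.randn(8, 50, 128)                 # audio token features
v = torch.randn(8, 50, 128)                 # visual token features
fused = GatedCrossAttention(128)(a, v)
```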
Time-dependent convection-diffusion problems are considered, particularly when the diffusivity is very small and sharp layers exist in the solutions. Nonphysical oscillations may occur in the numerical solutions when a regular mesh is used with standard computational methods. In this work, we develop a moving mesh SUPG (MM-SUPG) method, which integrates the streamline upwind Petrov-Galerkin (SUPG) method with the moving mesh partial differential equation (MMPDE) approach. The proposed method is designed to handle both isotropic and anisotropic diffusivity tensors. For the isotropic case, we focus on improving the stability of the numerical solution by utilizing both the artificial diffusion from SUPG and the mesh adaptation from MMPDE. For the anisotropic case, we focus on the positivity of the numerical solution: we introduce a weighted diffusion tensor and develop a new metric tensor to control the mesh movement. We also derive conditions on the time step size so that the numerical solution satisfies the discrete maximum principle (DMP). Numerical results demonstrate that the proposed MM-SUPG method outperforms SUPG with a fixed mesh and a moving mesh without SUPG.
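For reference, the baseline SUPG stabilization that MM-SUPG builds on has the standard weak form below for $u_t + \mathbf{b}\cdot\nabla u - \nabla\cdot(\varepsilon\nabla u) = f$; the moving-mesh coupling, the weighted diffusion tensor, and the DMP time-step conditions of the proposed method are omitted.

```latex
% Standard SUPG weak form (baseline, without the moving-mesh coupling):
% find u_h in V_h such that, for all v_h in V_h,
\begin{aligned}
&(\partial_t u_h, v_h) + (\mathbf{b}\cdot\nabla u_h, v_h)
  + (\varepsilon \nabla u_h, \nabla v_h) - (f, v_h) \\
&\quad + \sum_{K} \bigl(\partial_t u_h + \mathbf{b}\cdot\nabla u_h
  - \nabla\cdot(\varepsilon \nabla u_h) - f,\;
  \tau_K\, \mathbf{b}\cdot\nabla v_h \bigr)_K = 0 ,
\end{aligned}
```

where $\tau_K$ is the elementwise stabilization parameter, commonly scaled as $\tau_K \sim h_K / (2\lVert\mathbf{b}\rVert)$ in the convection-dominated regime.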
The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions, even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
Deep reinforcement learning algorithms can perform poorly in real-world tasks due to the discrepancy between source and target environments. This discrepancy is commonly viewed as a disturbance in transition dynamics. Many existing algorithms learn robust policies by modeling the disturbance and applying it to source environments during training, which usually requires prior knowledge about the disturbance and control over the simulators. However, these algorithms can fail in scenarios where the disturbance from target environments is unknown or intractable to model in simulators. To tackle this problem, we propose a novel model-free actor-critic algorithm, state-conservative policy optimization (SCPO), to learn robust policies without modeling the disturbance in advance. Specifically, SCPO reduces the disturbance in transition dynamics to a disturbance in state space and then approximates it by a simple gradient-based regularizer. SCPO is appealing in that it is simple to implement and requires neither additional knowledge about the disturbance nor specially designed simulators. Experiments on several robot control tasks demonstrate that SCPO learns policies that are robust to disturbances in transition dynamics.
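The gradient-based regularizer can be sketched roughly as follows: perturb the input state along its locally most sensitive direction and penalize the resulting change in the policy output. The step size, loss form, and network below are illustrative, not SCPO's exact formulation.

```python
# Minimal sketch of a gradient-based state regulariser in the spirit
# of SCPO: a state-space perturbation stands in for dynamics noise.
import torch

policy = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(),
                             torch.nn.Linear(64, 2))
s = torch.randn(32, 4, requires_grad=True)  # batch of states

a = policy(s)
# Gradient of the action magnitude w.r.t. the state gives the locally
# most sensitive perturbation direction.
g = torch.autograd.grad(a.norm(dim=-1).sum(), s, create_graph=True)[0]
s_adv = s + 0.01 * g.sign()                 # FGSM-style state shift

reg = ((policy(s_adv) - a) ** 2).mean()     # action smoothness penalty
loss = reg                                  # added to the actor loss
```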
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
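The adversarial component can be illustrated with a gradient reversal layer (GRL) feeding a small domain classifier; the feature sizes and classifier below are illustrative stand-ins, with a random tensor in place of the Faster R-CNN image- and instance-level features.

```python
# Minimal sketch of an adversarial domain classifier behind a
# gradient reversal layer (GRL), as used for domain-invariant features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)                 # identity in the forward pass
    @staticmethod
    def backward(ctx, grad):
        return -grad                        # flip gradients on the way back

domain_clf = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

feats = torch.randn(16, 256, requires_grad=True)  # stand-in features
labels = torch.randint(0, 2, (16, 1)).float()     # 0 = source, 1 = target

logits = domain_clf(GradReverse.apply(feats))
# Minimising this loss trains the domain classifier, while the reversed
# gradients push the feature extractor toward domain-invariant features.
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
```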