Many high-stakes decision-making problems, such as those found within cybersecurity and economics, can be modeled as competitive resource allocation games. In these games, multiple players must allocate limited resources to overcome their opponent(s) while minimizing any induced individual losses. However, existing means of assessing the performance of resource allocation algorithms are highly disparate and problem-dependent. As a result, evaluating such algorithms is unreliable or impossible in many contexts and applications, especially when considering differing levels of feedback. To resolve this problem, we propose a generalized definition of payoff that uses an arbitrary user-provided function, unifying performance evaluation across all contexts and levels of feedback. Using this definition, we develop metrics for evaluating player performance, and estimators to approximate them under uncertainty (i.e., bandit or semi-bandit feedback). These metrics and their respective estimators provide a problem-agnostic means to contextualize and evaluate algorithm performance. To validate the accuracy of our estimators, we explore the Colonel Blotto ($\mathcal{CB}$) game as an example. To this end, we propose a graph-pruning approach to efficiently identify feasible opponent decisions, which are used in computing our estimation metrics. Using various resource allocation algorithms and game parameters, a suite of $\mathcal{CB}$ games is simulated and used to compute and evaluate the quality of our estimates. These simulations empirically show our approach to be highly accurate at estimating the metrics associated with the unseen outcomes of an opponent's latent behavior.
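As a hedged illustration of the kind of user-provided payoff function and estimation the abstract describes (not the paper's actual code; `payoff`, `feasible_allocations`, the budget, and the allocations below are hypothetical, and no pruning is applied):

```python
"""Illustrative sketch: a user-provided payoff function for a Colonel
Blotto-style allocation game and a naive estimator that averages it over
all feasible opponent allocations (a pruning step, as in the abstract's
graph-pruning idea, would shrink this set)."""
from itertools import product
from statistics import mean

def payoff(ours, theirs):
    # Example user-provided payoff: net number of battlefields won.
    return sum((o > t) - (o < t) for o, t in zip(ours, theirs))

def feasible_allocations(budget, battlefields):
    # All integer allocations of `budget` units across `battlefields`.
    for alloc in product(range(budget + 1), repeat=battlefields):
        if sum(alloc) == budget:
            yield alloc

def estimate_expected_payoff(ours, budget, battlefields):
    # Average payoff against every feasible opponent decision.
    return mean(payoff(ours, a) for a in feasible_allocations(budget, battlefields))

print(estimate_expected_payoff((3, 2, 1), budget=6, battlefields=3))
```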
This paper establishes a combinatorial central limit theorem for stratified randomization that holds under Lindeberg-type conditions and allows for a growing number of large and small strata. The result is then applied to derive the asymptotic distributions of two test statistics: one proposed in a finite-population setting with randomly assigned instruments, and one in a super-population instrumental variables model, both settings involving many strata.
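For illustration only, a stratified difference-in-means statistic of the general form such a combinatorial central limit theorem covers; the notation below is standard and ours, not necessarily the paper's:

```latex
% Illustrative, standard notation (not necessarily the paper's).
\[
  \hat{\tau} \;=\; \sum_{s=1}^{S} \frac{n_s}{n}
  \left( \bar{Y}_{s,1} - \bar{Y}_{s,0} \right),
  \qquad
  \frac{\hat{\tau} - \tau}{\sqrt{\operatorname{Var}(\hat{\tau})}}
  \;\xrightarrow{d}\; \mathcal{N}(0,1),
\]
% where stratum $s$ contains $n_s$ units, the number of strata $S$ may grow
% with $n$, strata may be large or small, and a Lindeberg-type condition
% limits the contribution of any single stratum.
```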
In interactive systems, actions are often correlated, presenting an opportunity for more sample-efficient off-policy evaluation (OPE) and learning (OPL) in large action spaces. We introduce a unified Bayesian framework to capture these correlations through structured and informative priors. Within this framework, we propose sDM, a generic Bayesian approach for OPE and OPL with both algorithmic and theoretical foundations. Notably, sDM leverages action correlations without compromising computational efficiency. Moreover, inspired by online Bayesian bandits, we introduce Bayesian metrics that assess the average performance of algorithms across multiple problem instances, departing from conventional worst-case assessments. We analyze sDM in OPE and OPL, highlighting the benefits of leveraging action correlations. Empirical evidence showcases the strong performance of sDM.
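A minimal sketch of the general idea of a Bayesian direct-method estimator with a structured prior over correlated actions, assuming a conjugate Gaussian model; the prior, synthetic data, and names below are hypothetical and not the paper's sDM construction:

```python
"""Hypothetical sketch: a correlated Gaussian prior over per-action mean
rewards is updated from logged data (standard conjugate update) and then
used to score a target policy off-policy."""
import numpy as np

rng = np.random.default_rng(0)
K = 5                                   # number of actions
# Structured prior: nearby actions are assumed correlated (AR(1)-style).
Sigma0 = 0.5 ** np.abs(np.subtract.outer(np.arange(K), np.arange(K)))
mu0 = np.zeros(K)
noise_var = 1.0

# Synthetic logged data: actions and rewards from some logging policy.
actions = rng.integers(0, K, size=200)
rewards = rng.normal(loc=0.1 * actions, scale=1.0)

# Conjugate Gaussian posterior over the K mean rewards.
counts = np.bincount(actions, minlength=K)
sums = np.bincount(actions, weights=rewards, minlength=K)
precision = np.linalg.inv(Sigma0) + np.diag(counts / noise_var)
Sigma_post = np.linalg.inv(precision)
mu_post = Sigma_post @ (np.linalg.inv(Sigma0) @ mu0 + sums / noise_var)

# Off-policy value estimate of a target policy pi (a distribution over actions).
pi = np.full(K, 1.0 / K)
print("estimated value:", pi @ mu_post)
```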
We propose a novel, comprehensive, and robust agile requirements change management (ARCM) model that addresses the limitations of existing models and is tailored for agile software development in the global software development paradigm. To achieve this goal, we conducted an exhaustive literature review and an empirical study with RCM industry experts. Our study evaluated the effectiveness of the proposed RCM model in a real-world setting and identified its limitations and areas for improvement. The results of our study provide valuable insights into how the proposed RCM model can be applied in agile global software development environments to improve software development practices and optimize project success rates.
In financial modeling problems, non-Gaussian tails arise widely in many circumstances. Among such problems, the accurate estimation of the risk-neutral distribution (RND) from option prices is of great importance for researchers and practitioners. A precise RND can provide valuable information about the market's expectations, and can further support empirical asset pricing studies. This paper presents a parsimonious parametric approach to extracting the RND of underlying asset returns using a generative machine learning model. The model incorporates the asymmetric, heavy-tailed nature of returns by design. To calibrate the model, we design a Monte Carlo algorithm that performs well with the assistance of modern machine learning tools. Numerically, the model fits Heston option prices well and captures the main shapes of implied volatility curves. Empirically, using S\&P 500 index option prices, we demonstrate that the model outperforms several popular parametric density methods in terms of mean absolute error. Furthermore, the skewness and kurtosis of the RNDs extracted by our model are consistent with intuitive expectations.
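For intuition only, a toy calibration in the same spirit: a heavy-tailed parametric return distribution fitted to call prices by Monte Carlo pricing and mean absolute error. The Student-t family, the quotes, and the grid search below are our own simplifications, not the paper's generative model:

```python
"""Illustrative sketch: Monte Carlo calibration of a heavy-tailed
risk-neutral return distribution to (hypothetical) call option quotes."""
import numpy as np

rng = np.random.default_rng(1)
S0, strikes = 100.0, np.array([80.0, 90.0, 100.0, 110.0, 120.0])
market_prices = np.array([21.8, 13.5, 7.2, 3.3, 1.4])   # hypothetical quotes

def mc_call_prices(df, scale, n=200_000):
    # Terminal prices from a Student-t log-return model; risk-free drift
    # and discounting are omitted for brevity.
    ST = S0 * np.exp(scale * rng.standard_t(df, size=n))
    return np.array([np.mean(np.maximum(ST - K, 0.0)) for K in strikes])

# Grid search over (degrees of freedom, scale), minimizing mean absolute error.
best = min(
    ((df, s) for df in (3, 5, 8, 30) for s in (0.05, 0.1, 0.15, 0.2)),
    key=lambda p: np.mean(np.abs(mc_call_prices(*p) - market_prices)),
)
print("calibrated (df, scale):", best)
```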
The process of drawing electoral district boundaries is known as political redistricting. Within this context, gerrymandering is the practice of drawing these boundaries such that they unfairly favor a particular political party, often leading to unequal representation and skewed electoral outcomes. One of the few ways to detect gerrymandering is by algorithmically sampling redistricting plans. Previous methods mainly focus on sampling from some neighborhood of ``realistic'' districting plans, rather than a uniform sample of the entire space. We present a deterministic subexponential-time algorithm to uniformly sample from the space of all possible $ k $-partitions of a bounded-degree planar graph, and with this construct a sample of the entire space of redistricting plans. We also give a way to restrict this sample space to plans that match certain compactness and population constraints at the cost of added complexity. The algorithm runs in $ 2^{O(\sqrt{n}\log n)} $ time, although we only give a heuristic implementation. Our method generalizes an algorithm to count self-avoiding walks on a square to count paths that split general planar graphs into $ k $ regions, and uses this to sample from the space of all $ k $-partitions of a planar graph.
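For intuition, a brute-force sketch of the sample space in question on a tiny planar grid graph; the paper's algorithm achieves this in subexponential time, whereas this exhaustive enumeration is exponential and purely illustrative:

```python
"""Brute-force illustration: uniform sampling from all partitions of a 2x3
planar grid graph into k connected, population-balanced districts."""
import random
from itertools import product

# 2x3 grid graph, nodes 0..5, edges between horizontal/vertical neighbours.
edges = {(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)}
adj = {v: {u for e in edges for u in e if v in e and u != v} for v in range(6)}

def connected(part):
    part = set(part)
    stack, seen = [next(iter(part))], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] & (part - seen))
    return seen == part

k = 2
plans = []
for labels in product(range(k), repeat=6):        # every k-labelling of the nodes
    parts = [[v for v in range(6) if labels[v] == i] for i in range(k)]
    if all(parts) and all(connected(p) for p in parts):
        # Optional population constraint: district sizes within one of each other.
        if max(map(len, parts)) - min(map(len, parts)) <= 1:
            plans.append(parts)

# Each unordered partition appears k! times among the labellings, so a
# uniform draw from `plans` is still uniform over distinct partitions.
print(len(plans), "feasible plans; uniform draw:", random.choice(plans))
```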
We reveal and address the frequently overlooked yet important issue of disguised procedural unfairness, namely, potentially inadvertent alterations to the behavior of neutral (i.e., not problematic) aspects of the data generating process, and/or the lack of procedural assurance of the greatest benefit of the least advantaged individuals. Inspired by John Rawls's advocacy for pure procedural justice, we view automated decision-making as a microcosm of social institutions, and consider how the data generating process itself can satisfy the requirements of procedural fairness. We propose a framework that decouples the objectionable data generating components from the neutral ones by utilizing reference points and the associated value instantiation rule. Our findings highlight the necessity of preventing disguised procedural unfairness, drawing attention not only to the objectionable data generating components that we aim to mitigate, but also, more importantly, to the neutral components that we intend to keep unaffected.
We propose a new joint mean and correlation regression model for correlated multivariate discrete responses that simultaneously regresses the mean of each response against a set of covariates, and the correlations between responses against a set of similarity/distance measures. A set of joint estimating equations is formulated to construct an estimator of both the mean regression coefficients and the correlation regression parameters. Under a general setting where the number of responses can tend to infinity, the joint estimator is shown to be consistent and asymptotically normally distributed, with differing rates of convergence because the mean regression coefficients are heterogeneous across responses. An iterative estimation procedure is developed to obtain parameter estimates in the required, constrained parameter space. We apply the proposed model to a multivariate abundance dataset comprising overdispersed counts of 38 Carabidae ground beetle species sampled throughout Scotland, along with information about the environmental conditions of each site and the traits of each species. Results show, in particular, that the relationships between the mean abundances of various beetle species and environmental covariates differ across species, and that beetle total length has a statistically important effect in driving the correlations between species. Simulations demonstrate the strong finite-sample performance of the proposed estimator in terms of point estimation and inference.
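A hypothetical sketch of an alternating scheme in the same spirit (not the paper's estimator): per-response Poisson regressions for the means, followed by a least-squares regression of residual correlations on a between-response distance measure. The simulated data, trait distances, and update rules below are all ours:

```python
"""Hypothetical sketch: alternating mean regression (Poisson GLM per
response) and correlation regression (residual correlations on trait
distance)."""
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 300, 6, 2                        # sites, responses (species), covariates
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = rng.normal(scale=0.4, size=(m, p))
Y = rng.poisson(np.exp(X @ beta_true.T))   # counts, independent here for simplicity
trait_dist = np.abs(np.subtract.outer(rng.normal(size=m), rng.normal(size=m)))

# Step 1: mean regression coefficients, one Poisson fit per response
# (Fisher scoring for a log-link Poisson GLM).
beta = np.zeros((m, p))
for j in range(m):
    for _ in range(25):
        mu = np.exp(X @ beta[j])
        beta[j] += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (Y[:, j] - mu))

# Step 2: correlation regression -- regress pairwise Pearson-residual
# correlations on the trait distance between the two responses.
mu = np.exp(X @ beta.T)
resid = (Y - mu) / np.sqrt(mu)
R = np.corrcoef(resid, rowvar=False)
iu = np.triu_indices(m, k=1)
Z = np.column_stack([np.ones(iu[0].size), trait_dist[iu]])
alpha = np.linalg.lstsq(Z, R[iu], rcond=None)[0]
print("correlation regression parameters:", alpha)
```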
With the rapid development of facial forgery techniques, forgery detection has attracted increasing attention due to security concerns. Existing approaches attempt to use frequency information to mine subtle artifacts in high-quality forged faces. However, their exploitation of frequency information is coarse-grained and, more importantly, their vanilla learning process struggles to extract fine-grained forgery traces. To address this issue, we propose a progressive enhancement learning framework that exploits both RGB and fine-grained frequency clues. Specifically, we perform a fine-grained decomposition of RGB images to completely decouple the real and fake traces in the frequency space. We then build the framework on a two-branch network combined with self-enhancement and mutual-enhancement modules. The self-enhancement module captures the traces in different input spaces based on spatial noise enhancement and channel attention. The mutual-enhancement module concurrently enhances RGB and frequency features by communicating in the shared spatial dimension. The progressive enhancement process facilitates the learning of discriminative features with fine-grained face forgery clues. Extensive experiments on several datasets show that our method outperforms state-of-the-art face forgery detection methods.
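A hedged PyTorch sketch of the two named ideas, self-enhancement (a noise residual with channel attention) and mutual enhancement across the RGB and frequency branches; the layer choices and module names are ours, not the paper's architecture:

```python
"""Hypothetical sketch of self-enhancement and mutual-enhancement blocks
for a two-branch (RGB / frequency) network."""
import torch
import torch.nn as nn

class SelfEnhance(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.smooth = nn.Conv2d(channels, channels, 3, padding=1)  # crude "noise" extractor
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        noise = x - self.smooth(x)            # high-frequency residual
        return (x + noise) * self.attn(x)     # channel-attention reweighting

class MutualEnhance(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, rgb_feat, freq_feat):
        shared = self.fuse(torch.cat([rgb_feat, freq_feat], dim=1))
        return rgb_feat + shared, freq_feat + shared   # each branch informed by the other

rgb, freq = torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64)
rgb, freq = MutualEnhance(32)(SelfEnhance(32)(rgb), SelfEnhance(32)(freq))
print(rgb.shape, freq.shape)
```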
Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g., evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs are able to significantly outperform previous approaches while also being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of different components of our framework and devise the best configuration that achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
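A minimal sketch of the memory-module idea in TGN-style models, in which a recurrent cell updates each node's state whenever a timed interaction event touches it; this is our own simplification, not the authors' reference implementation:

```python
"""Hypothetical sketch: per-node memory updated by a GRU cell on each timed
event (source, destination, timestamp, message)."""
import torch
import torch.nn as nn

class NodeMemory(nn.Module):
    def __init__(self, num_nodes, msg_dim, mem_dim):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, mem_dim))
        self.cell = nn.GRUCell(msg_dim + 1, mem_dim)   # message features + time delta
        self.last_update = torch.zeros(num_nodes)

    def forward(self, src, dst, t, msg):
        # Update the memories of both endpoints of each event.
        for nodes in (src, dst):
            dt = (t - self.last_update[nodes]).unsqueeze(1)
            inp = torch.cat([msg, dt], dim=1)
            # Gradients through memory are omitted in this sketch.
            self.memory[nodes] = self.cell(inp, self.memory[nodes]).detach()
            self.last_update[nodes] = t
        return self.memory[src], self.memory[dst]

mem = NodeMemory(num_nodes=100, msg_dim=8, mem_dim=16)
src, dst = torch.tensor([3, 7]), torch.tensor([7, 42])
out = mem(src, dst, t=torch.tensor([1.0, 2.0]), msg=torch.randn(2, 8))
print(out[0].shape)
```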
Multi-relation Question Answering is a challenging task, due to the need for elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
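A toy, non-neural sketch of the hop-by-hop control flow described above; the tiny knowledge base, relation matching, and state update are ours, not the learned model:

```python
"""Toy sketch of hop-by-hop reasoning over a knowledge base, keeping the
intermediate predictions so the chain of hops can be inspected."""
kb = {  # (subject, relation) -> object triples
    ("Inception", "directed_by"): "Christopher Nolan",
    ("Christopher Nolan", "born_in"): "London",
}

def answer(question_words, start_entity, max_hops=3):
    state, entity, trace = list(question_words), start_entity, []
    for _ in range(max_hops):
        # "Analyze" part of the question: pick the first word matching a relation.
        rel = next((w for w in state if (entity, w) in kb), None)
        if rel is None:
            break
        entity = kb[(entity, rel)]          # follow the predicted relation
        state.remove(rel)                   # update the question representation
        trace.append((rel, entity))         # intermediate prediction, kept for inspection
    return entity, trace

print(answer(["directed_by", "born_in"], "Inception"))
# -> ('London', [('directed_by', 'Christopher Nolan'), ('born_in', 'London')])
```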