Properties of stable matchings in the popular random-matching-market model have been studied for over 50 years. In a random matching market, each agent has complete preferences drawn uniformly and independently at random. Wilson (1972), Knuth (1976) and Pittel (1989) proved that in balanced random matching markets, the proposers are matched to their $\ln n$th choice on average. In this paper, we consider markets where agents have partial (truncated) preferences, that is, the proposers only rank their top $d$ partners. Despite the long history of the problem, the following fundamental question remained unanswered: \emph{what is the smallest value of $d$ that results in a perfect stable matching with high probability?} We answer this question exactly -- we prove that a degree of $\ln^2 n$ is necessary and sufficient. That is, we show that if $d < (1-\epsilon) \ln^2 n$ then with high probability no stable matching is perfect, and if $d > (1+ \epsilon) \ln^2 n$ then with high probability every stable matching is perfect. This settles a recent conjecture by Kanoria, Min and Qian (2021). We generalize this threshold to unbalanced markets: we consider a matching market with $n$ agents on the shorter side and $n(\alpha+1)$ agents on the longer side. We show that for markets with $\alpha = o(1)$, the sharp threshold characterizing the existence of a perfect stable matching occurs at $d = \ln n \cdot \ln \left(\frac{1 + \alpha}{\alpha + \frac{1}{n(\alpha+1)}} \right)$. Finally, we extend the line of work studying the effect of imbalance on the expected rank of the proposers (termed the ``stark effect of competition''). We establish the regime in unbalanced markets that forces this stark effect to take shape in markets with partial preferences.
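As an illustration of the model (not of the paper's proof techniques), the sketch below, with illustrative parameter choices, simulates a balanced random market in which each proposer ranks a uniformly random set of $d$ partners while receivers rank all proposers, runs proposer-optimal deferred acceptance, and checks whether the resulting stable matching is perfect.

```python
import random

def truncated_deferred_acceptance(n, d, seed=0):
    """Simulate a balanced random market where each of n proposers ranks a
    uniformly random set of d receivers (in random order); receivers rank
    all proposers uniformly at random. Returns the proposer-optimal stable
    matching as a dict proposer -> receiver (partial if not perfect)."""
    rng = random.Random(seed)
    prefs = [rng.sample(range(n), d) for _ in range(n)]   # truncated proposer lists
    recv_rank = []                                        # receivers' rank lookups
    for _ in range(n):
        order = list(range(n))
        rng.shuffle(order)
        recv_rank.append({p: r for r, p in enumerate(order)})
    match_of_recv = [None] * n      # receiver -> current proposer
    next_proposal = [0] * n         # index of next receiver each proposer will try
    free = list(range(n))
    while free:
        p = free.pop()
        while next_proposal[p] < d:
            r = prefs[p][next_proposal[p]]
            next_proposal[p] += 1
            cur = match_of_recv[r]
            if cur is None:
                match_of_recv[r] = p
                break
            if recv_rank[r][p] < recv_rank[r][cur]:
                match_of_recv[r] = p
                free.append(cur)    # bumped proposer resumes later
                break
        # If p exhausts its list without acceptance, it stays unmatched.
    return {p: r for r, p in enumerate(match_of_recv) if p is not None}

n = 200
for d in (5, 40):   # d well below vs. above ln(n)^2 ~ 28; small n may show finite-size effects
    matching = truncated_deferred_acceptance(n, d)
    print(d, "perfect" if len(matching) == n else f"unmatched: {n - len(matching)}")
```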
Generating accurate Structured Query Language (SQL) is a long-standing problem, especially in matching users' semantic queries with structured databases and then generating structured SQL. Existing models typically feed the user query and database schema into an LLM and rely on the LLM to perform semantic-structure matching and generate structured SQL. However, such solutions overlook the structural information within user queries and databases, which can be exploited to enhance the generation of structured SQL. This oversight can lead to inaccurate or unexecutable SQL. To fully exploit this structure, we propose a structure-to-SQL framework that leverages the inherent structural information to improve the SQL generation of LLMs. Specifically, we introduce our Structure Guided SQL~(SGU-SQL) generation model. SGU-SQL first links user queries and databases in a structure-enhanced manner. It then decomposes the complicated linked structures with grammar trees to guide the LLM to generate the SQL step by step. Extensive experiments on two benchmark datasets demonstrate that SGU-SQL outperforms sixteen SQL generation baselines.
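As a rough, hypothetical sketch of the general structure-guided idea (not SGU-SQL's actual implementation), the snippet below links question tokens to schema elements and then builds a clause-by-clause prompt; the schema, question, and the commented-out `call_llm` interface are all illustrative placeholders.

```python
from difflib import SequenceMatcher

def link_schema(question, schema):
    """Naive structure linking: attach each schema column to question tokens
    it resembles. Real systems use far richer structural matching."""
    links = {}
    for table, columns in schema.items():
        for col in columns:
            for tok in question.lower().split():
                if SequenceMatcher(None, col.lower(), tok).ratio() > 0.7:
                    links.setdefault(f"{table}.{col}", []).append(tok)
    return links

def stepwise_sql_prompt(question, schema, links):
    """Decompose generation into clause-level steps, exposing the linked
    structure to the model instead of a flat schema dump."""
    steps = ["SELECT", "FROM/JOIN", "WHERE", "GROUP BY / ORDER BY"]
    prompt = [f"Question: {question}", f"Schema: {schema}", f"Linked columns: {links}"]
    prompt += [f"Step {i+1}: write only the {clause} clause."
               for i, clause in enumerate(steps)]
    return "\n".join(prompt)

schema = {"singer": ["name", "age", "country"]}
question = "List the names of singers from France"
links = link_schema(question, schema)
print(stepwise_sql_prompt(question, schema, links))
# sql = call_llm(prompt)  # hypothetical LLM call; not defined here
```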
Mitigating the disparate impact of statistical machine learning methods is crucial for ensuring fairness. While extensive research aims to reduce disparity, the effect of using a \emph{finite dataset} -- as opposed to the entire population -- remains unclear. This paper explores the statistical foundations of fair binary classification with two protected groups, focusing on controlling demographic disparity, defined as the difference in acceptance rates between the groups. Although fairness may come at the cost of accuracy even with infinite data, we show that using a finite sample incurs additional costs due to the need to estimate group-specific acceptance thresholds. We study the minimax optimal classification error while constraining demographic disparity to a user-specified threshold. To quantify the impact of fairness constraints, we introduce a novel measure called \emph{fairness-aware excess risk} and derive a minimax lower bound on this measure that all classifiers must satisfy. Furthermore, we propose FairBayes-DDP+, a group-wise thresholding method with an offset that we show attains the minimax lower bound. Our lower bound proofs involve several innovations. Experiments support that FairBayes-DDP+ controls disparity at the user-specified level, while being faster and having a more favorable fairness-accuracy tradeoff than several baselines.
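To make the group-wise thresholding idea concrete, here is a simplified sketch (not the paper's FairBayes-DDP+ estimator): given estimated scores $\hat{\eta}(x)$ for each group, it searches for per-group thresholds whose acceptance rates differ by at most a user-specified level $\delta$ while keeping estimated accuracy high. The synthetic data and helper names are illustrative.

```python
import numpy as np

def group_thresholds(scores_a, scores_b, delta, grid=np.linspace(0.01, 0.99, 99)):
    """Pick per-group thresholds (t_a, t_b) maximizing estimated accuracy of
    the rule 'accept if score > t_g', subject to the demographic-disparity
    constraint |P(accept | A) - P(accept | B)| <= delta.
    scores_* are estimated probabilities eta_hat(x) = P(Y=1 | X=x, group)."""
    best, best_acc = None, -1.0
    for ta in grid:
        for tb in grid:
            rate_a = np.mean(scores_a > ta)
            rate_b = np.mean(scores_b > tb)
            if abs(rate_a - rate_b) > delta:
                continue
            # With calibrated scores, expected accuracy of thresholding at t is
            # E[eta * 1{eta > t} + (1 - eta) * 1{eta <= t}].
            acc_a = np.mean(np.where(scores_a > ta, scores_a, 1 - scores_a))
            acc_b = np.mean(np.where(scores_b > tb, scores_b, 1 - scores_b))
            acc = 0.5 * (acc_a + acc_b)   # assumes equally sized groups
            if acc > best_acc:
                best, best_acc = (ta, tb), acc
    return best

rng = np.random.default_rng(0)
scores_a = rng.beta(2, 5, size=2000)   # synthetic calibrated scores, group A
scores_b = rng.beta(5, 2, size=2000)   # synthetic calibrated scores, group B
print(group_thresholds(scores_a, scores_b, delta=0.05))
```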
The design of online algorithms for matching markets and revenue management settings is usually bound by the stochastic prior that the demand process is formed by a fixed-length sequence of queries with unknown types, each drawn independently. This assumption of {\em serial independence} implies that the demand of each type, i.e., the number of queries of a given type, has low variance and is approximately Poisson-distributed. This paper explores more general stochastic models for online edge-weighted matching that depart from the serial independence assumption. We propose two new models, $\Indep$ and $\Correl$, that capture different forms of serial correlations by combining a nonparametric distribution for the demand with standard assumptions on the arrival patterns -- adversarial or random order. The $\Indep$ model has arbitrary marginal distributions for the demands but assumes cross-sectional independence for the customer types, whereas the $\Correl$ model captures common shocks across customer types. We demonstrate that fluid relaxations, which rely solely on expected demand information, have arbitrarily bad performance guarantees. In contrast, we develop new algorithms that essentially achieve optimal constant-factor performance guarantees in each model. Our mathematical analysis includes tighter linear programming relaxations that leverage distribution knowledge, and a new lossless randomized rounding scheme in the case of $\Indep$. In numerical simulations of the $\Indep$ model, we find that tighter relaxations are beneficial under high-variance demand and that our demand-aware rounding scheme can outperform stockout-aware rounding.
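For intuition, a fluid relaxation of the kind referenced above can be written in generic notation (a standard formulation, not necessarily the paper's exact program) using only the expected demand $\mathbb{E}[D_t]$ of each customer type $t$:
\[
\max_{x \ge 0} \ \sum_{i,t} w_{i,t}\, x_{i,t} \quad \text{s.t.} \quad \sum_{t} x_{i,t} \le 1 \ \ \forall i, \qquad \sum_{i} x_{i,t} \le \mathbb{E}[D_t] \ \ \forall t,
\]
where $i$ ranges over offline resources, $w_{i,t}$ is the edge weight, and $x_{i,t}$ is the expected mass of type-$t$ queries matched to $i$. Because this program sees only $\mathbb{E}[D_t]$, it cannot distinguish low-variance demand from correlated or heavy-tailed demand, which is why relaxations of this form can have arbitrarily bad guarantees in the models above and why tighter, distribution-aware relaxations are needed.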
Mixed initiative is one of the key factors in controlling the direction of a conversation. For a speaker, responding passively or leading proactively results in rather different responses. However, most dialogue systems focus on training a holistic response generation model without any distinction among initiatives. This leads to the cross-contamination problem, where the model confuses different initiatives and generates inappropriate responses. Moreover, obtaining plenty of human annotations for initiative labels can be expensive. To address these issues, we propose a general mix-Initiative Dynamic Prefix Tuning framework (IDPT) that decouples different initiatives from the generation model by learning initiative-aware prefixes in both supervised and unsupervised settings. Specifically, IDPT decouples initiative factors into different prefix parameters and uses an attention mechanism to dynamically adjust the selection of initiatives that guide generation. The prefix parameters can be tuned towards accurate initiative prediction as well as mix-initiative response generation. Extensive experiments on two public dialogue datasets show that the proposed IDPT outperforms previous baselines on both automatic metrics and human evaluations. It also manages to generate appropriate responses with manipulated initiatives.
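A minimal sketch of the general idea, assuming a prefix-tuning setup (illustrative only, not IDPT's released code): each initiative gets its own learnable prefix, and an attention distribution computed from the dialogue-context representation mixes the prefixes before they are prepended to a frozen decoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InitiativeAwarePrefix(nn.Module):
    """Illustrative module: one learnable prefix per initiative, mixed by an
    attention distribution computed from the dialogue context."""
    def __init__(self, num_initiatives=2, prefix_len=10, hidden=768):
        super().__init__()
        # (num_initiatives, prefix_len, hidden) learnable prefix embeddings.
        self.prefixes = nn.Parameter(torch.randn(num_initiatives, prefix_len, hidden) * 0.02)
        self.query = nn.Linear(hidden, hidden)        # maps context to a query
        self.keys = nn.Parameter(torch.randn(num_initiatives, hidden) * 0.02)

    def forward(self, context_repr):
        """context_repr: (batch, hidden) pooled dialogue-context representation.
        Returns the mixed prefix (batch, prefix_len, hidden) and the attention
        weights, which can be supervised with initiative labels when available."""
        q = self.query(context_repr)                            # (B, H)
        logits = q @ self.keys.t() / self.keys.shape[-1] ** 0.5 # (B, num_initiatives)
        attn = F.softmax(logits, dim=-1)                        # initiative selection
        mixed = torch.einsum("bi,ilh->blh", attn, self.prefixes)
        return mixed, attn

prefix_module = InitiativeAwarePrefix()
ctx = torch.randn(4, 768)              # e.g. pooled output of a context encoder
prefix, attn = prefix_module(ctx)
print(prefix.shape, attn.shape)        # (4, 10, 768), (4, 2)
```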
Entity matching is a critical challenge in data integration and cleaning, central to tasks like fuzzy joins and deduplication. Traditional approaches have focused on overcoming fuzzy term representations through methods such as edit distance, Jaccard similarity, and, more recently, embeddings and deep neural networks, including advancements from large language models (LLMs) like GPT. However, the core challenge in entity matching extends beyond term fuzziness to the ambiguity in defining what constitutes a "match," especially when integrating with external databases. This ambiguity arises from varying levels of detail and granularity among entities, which complicates exact matching. We propose a novel approach that shifts the focus from purely identifying semantic similarities to understanding and defining the "relations" between entities, which we argue is crucial for resolving ambiguities in matching. By predefining a set of relations relevant to the task at hand, our method allows analysts to navigate the spectrum of similarity more effectively, from exact matches to conceptually related entities.
Detecting and mitigating Radio Frequency Interference (RFI) is critical for enabling and maximising the scientific output of radio telescopes. The emergence of machine learning methods has led to their application in radio astronomy, including RFI detection. Spiking Neural Networks (SNNs), inspired by biological systems, are well-suited for processing spatio-temporal data. This study introduces the first exploratory application of SNNs to an astronomical data-processing task, specifically RFI detection. We adapt the nearest-latent-neighbours (NLN) algorithm and auto-encoder architecture proposed by previous authors to SNN execution via direct ANN2SNN conversion, enabling simplified downstream RFI detection by sampling the naturally varying latent space of the internal spiking neurons. Our subsequent evaluation aims to determine whether SNNs are viable for future RFI detection schemes. We evaluate detection performance on the simulated HERA telescope dataset and the hand-labelled LOFAR observation dataset provided by the original authors. We additionally evaluate detection performance on a new MeerKAT-inspired simulation dataset that poses a technical challenge for machine-learnt RFI detection methods. This dataset focuses on satellite-based RFI, an increasingly important class of RFI, and is an additional contribution of this work. Our approach remains competitive with existing methods in AUROC, AUPRC and F1 scores on the HERA dataset but exhibits difficulty on the LOFAR and Tabascal datasets. Our method maintains this accuracy while completely removing the compute- and memory-intensive latent sampling step found in NLN. This work demonstrates the viability of SNNs as a promising avenue for machine-learning-based RFI detection in radio telescopes by establishing a minimal performance baseline on traditional and nascent satellite-based RFI sources, and is, to our knowledge, the first work to apply SNNs in astronomy.
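For readers unfamiliar with spiking networks, a minimal leaky integrate-and-fire (LIF) layer, the basic unit assumed by ANN2SNN conversion, can be sketched as follows (illustrative only; this is not the study's conversion pipeline):

```python
import numpy as np

def lif_layer(currents, threshold=1.0, leak=0.9):
    """Simulate a layer of leaky integrate-and-fire neurons.
    currents: array of shape (timesteps, num_neurons) of input currents
    (e.g. per-channel intensities of a spectrogram slice over time).
    Returns a binary spike train of the same shape."""
    T, N = currents.shape
    membrane = np.zeros(N)
    spikes = np.zeros((T, N), dtype=np.int8)
    for t in range(T):
        membrane = leak * membrane + currents[t]   # leaky integration
        fired = membrane >= threshold
        spikes[t] = fired
        membrane[fired] = 0.0                      # reset after a spike
    return spikes

rng = np.random.default_rng(0)
spectrogram = rng.random((100, 64))               # 100 timesteps x 64 channels
out = lif_layer(0.3 * spectrogram)
print(out.mean())                                 # average firing rate
```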
During the Second World War, estimates of the number of tanks deployed by Germany were critically needed. The Allies adopted two methods to obtain this information: espionage and statistical analysis. The latter approach was far more successful and is as follows: assuming that the tanks are sequentially numbered starting from 1, if we observe $k$ serial numbers from an unknown total of $N$ tanks, with the highest observed number being $M$, then the best linear unbiased estimator for $N$ is $M(1+1/k)-1$. This is now known as the German Tank Problem. Suppose one wishes to estimate the productivity of a rival by inspecting captured or destroyed tanks, each with a unique serial number. In many situations, the original German Tank Problem is insufficient, since typically there are $l>1$ factories, and tanks produced by different factories may have serial numbers in disjoint, often widely separated ranges, rather than being sequentially numbered starting from 1. We wish to estimate the total tank production across all of the factories. We construct an efficient procedure to estimate the total productivity and prove that our procedure effectively estimates $N$ when $\log l/\log k$ is sufficiently small, and is robust against both large and small gaps between factories. In the final section, we show that given information about the gaps, we can construct a far better estimator that is also effective when we have a small number of samples. When the number of samples is small compared to the number of gaps, the mean squared error of this new estimator is several orders of magnitude smaller than that of the estimator that assumes no gap information. This quantifies the importance of hiding such information if one wishes to conceal one's productivity from a rival.
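As a quick illustration of the classic single-factory estimator $M(1+1/k)-1$ (the setting before the multi-factory generalization), the following simulation, with illustrative parameter choices, checks that it is approximately unbiased:

```python
import random

def tank_estimate(sample):
    """Classic German Tank estimator from a list of observed serial numbers."""
    k, m = len(sample), max(sample)
    return m * (1 + 1 / k) - 1

def simulate(N=1000, k=20, trials=20000, seed=0):
    """Draw k serial numbers without replacement from 1..N and average the
    estimator over many trials; the mean should be close to the true N."""
    rng = random.Random(seed)
    estimates = [tank_estimate(rng.sample(range(1, N + 1), k)) for _ in range(trials)]
    return sum(estimates) / trials

print(simulate())   # ~1000 on average, confirming approximate unbiasedness
```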
The concept of causality plays an important role in human cognition. In the past few decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the advancement of deep learning techniques, deep learning has been increasingly used for causal inference on counterfactual data. Typically, deep causal models map the characteristics of covariates to a representation space and then design various objective functions to estimate counterfactual outcomes without bias under different optimization methods. This paper surveys deep causal models, and its core contributions are as follows: 1) we provide relevant metrics under multiple treatments and continuous-dose treatment; 2) we present a comprehensive overview of deep causal models from both the temporal-development and method-classification perspectives; 3) we provide a detailed and comprehensive classification and analysis of relevant datasets and source code.
Embedding models for deterministic Knowledge Graphs (KG) have been extensively studied, with the purpose of capturing latent semantic relations between entities and incorporating the structured knowledge into machine learning. However, there are many KGs that model uncertain knowledge, typically attaching a confidence score to each relation fact to capture its inherent uncertainty, and embedding such uncertain knowledge remains an unresolved challenge. Capturing uncertain knowledge will benefit many knowledge-driven applications such as question answering and semantic search by providing a more natural characterization of the knowledge. In this paper, we propose a novel uncertain KG embedding model, UKGE, which aims to preserve both structural and uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different learning objectives. Experiments are conducted on three real-world uncertain KGs via three tasks, i.e., confidence prediction, relation fact ranking, and relation fact classification. UKGE shows effectiveness in capturing uncertain knowledge by achieving promising results and consistently outperforming the baselines on these tasks.
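A simplified sketch of one way to regress embedding-based plausibility scores toward confidence values (not necessarily UKGE's exact objective or scoring function): a DistMult-style bilinear score is squashed to $[0,1]$ and fit to the observed confidences with a squared-error loss.

```python
import torch
import torch.nn as nn

class ConfidenceKGE(nn.Module):
    """Simplified sketch: DistMult-style plausibility mapped to (0, 1) and
    regressed toward the observed confidence score of each relation fact."""
    def __init__(self, num_entities, num_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def forward(self, heads, rels, tails):
        # Bilinear plausibility score, squashed to (0, 1).
        score = (self.ent(heads) * self.rel(rels) * self.ent(tails)).sum(-1)
        return torch.sigmoid(score)

model = ConfidenceKGE(num_entities=1000, num_relations=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy batch: (head, relation, tail) triples with confidence scores in [0, 1].
h = torch.randint(0, 1000, (32,))
r = torch.randint(0, 20, (32,))
t = torch.randint(0, 1000, (32,))
conf = torch.rand(32)

pred = model(h, r, t)
loss = loss_fn(pred, conf)        # fit confidences rather than 0/1 labels
loss.backward()
opt.step()
```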
It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
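In the multiclass-classification instantiation, the Outlier Exposure objective adds a term pushing the model's predictive distribution on auxiliary outliers toward the uniform distribution; a minimal PyTorch sketch, with illustrative names and weighting, is:

```python
import torch
import torch.nn.functional as F

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Outlier Exposure objective for a K-way classifier: standard
    cross-entropy on in-distribution data, plus a term pushing the softmax
    on auxiliary outliers toward the uniform distribution."""
    ce_in = F.cross_entropy(logits_in, labels_in)
    # Cross-entropy to the uniform distribution over K classes:
    # mean over outlier examples of -(1/K) * sum_k log softmax(logits_out)_k.
    uniform_term = -F.log_softmax(logits_out, dim=1).mean()
    return ce_in + lam * uniform_term

# Toy usage with a 10-class model's logits.
logits_in = torch.randn(16, 10, requires_grad=True)
labels_in = torch.randint(0, 10, (16,))
logits_out = torch.randn(16, 10, requires_grad=True)   # auxiliary outlier batch
loss = oe_loss(logits_in, labels_in, logits_out)
loss.backward()
print(float(loss))
```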