We show that established performance metrics in binary classification, such as the F-score, the Jaccard similarity coefficient or Matthews' correlation coefficient (MCC), are not robust to class imbalance in the sense that if the proportion of the minority class tends to $0$, the true positive rate (TPR) of the Bayes classifier under these metrics tends to $0$ as well. Thus, in imbalanced classification problems, these metrics favour classifiers which ignore the minority class. To alleviate this issue we introduce robust modifications of the F-score and the MCC for which, even in strongly imbalanced settings, the TPR is bounded away from $0$. We numerically illustrate the behaviour of the various performance metrics in simulations as well as on a credit default data set. We also discuss connections to the ROC and precision-recall curves and give recommendations on how to combine their usage with performance metrics.
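A small numerical sketch (ours, not taken from the paper) of the kind of degradation at issue: for a classifier with fixed TPR and FPR, the F-score, the Jaccard index and the MCC all collapse towards $0$ as the minority-class proportion shrinks, so in the limit they cannot separate it from the trivial majority classifier. All numbers and parameters below are illustrative.

```python
# Illustrative sketch: metric values of a fixed non-trivial classifier
# (TPR = 0.9, FPR = 0.05) as the minority-class proportion pi shrinks.
import numpy as np
from sklearn.metrics import f1_score, jaccard_score, matthews_corrcoef

rng = np.random.default_rng(0)
n = 200_000
for pi in [0.1, 0.01, 0.001]:                      # minority-class proportion
    y_true = (rng.random(n) < pi).astype(int)
    flip = rng.random(n)
    y_pred = np.where(y_true == 1, flip < 0.90, flip < 0.05).astype(int)
    print(f"pi={pi}: F1={f1_score(y_true, y_pred):.3f}, "
          f"Jaccard={jaccard_score(y_true, y_pred):.3f}, "
          f"MCC={matthews_corrcoef(y_true, y_pred):.3f}")
```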
Random probabilities are a key component of many nonparametric methods in Statistics and Machine Learning. To quantify comparisons between different laws of random probabilities, several works have started to use the elegant Wasserstein over Wasserstein distance. In this paper we prove that the infinite dimensionality of the space of probabilities drastically deteriorates its sample complexity, which is slower than any polynomial rate in the sample size. We propose a new distance that preserves many desirable properties of the former while achieving a parametric rate of convergence. In particular, our distance 1) metrizes weak convergence; 2) can be estimated numerically through samples with low complexity; 3) can be bounded analytically from above and below. The main ingredients are integral probability metrics, which lead to the name hierarchical IPM.
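To make the object concrete, here is an illustrative plug-in computation of a Wasserstein-over-Wasserstein distance between two collections of one-dimensional sample sets (not the estimator studied in the paper; all names and parameters are ours). With uniform outer weights and equally many inner distributions, the outer optimal transport reduces to an optimal assignment.

```python
# Illustrative sketch of a nested (Wasserstein over Wasserstein) distance in 1-D.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.optimize import linear_sum_assignment

def nested_w1(collection_a, collection_b):
    # Inner distances: W1 between every pair of empirical measures.
    cost = np.array([[wasserstein_distance(x, y) for y in collection_b]
                     for x in collection_a])
    # Outer OT with uniform weights and equal sizes reduces to an optimal assignment.
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
A = [rng.normal(loc=m, size=200) for m in np.linspace(0, 1, 5)]
B = [rng.normal(loc=m + 0.3, size=200) for m in np.linspace(0, 1, 5)]
print(nested_w1(A, B))   # roughly 0.3 in this toy setup
```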
Untargeted metabolomic profiling through liquid chromatography-mass spectrometry (LC-MS) measures a vast array of metabolites within biospecimens, advancing drug development, disease diagnosis, and risk prediction. However, the low throughput of LC-MS poses a major challenge for biomarker discovery, annotation, and experimental comparison, necessitating the merging of multiple datasets. Current data pooling methods encounter practical limitations due to their vulnerability to data variations and hyperparameter dependence. Here we introduce GromovMatcher, a flexible and user-friendly algorithm that automatically combines LC-MS datasets using optimal transport. By capitalizing on feature intensity correlation structures, GromovMatcher delivers superior alignment accuracy and robustness compared to existing approaches. This algorithm scales to thousands of features requiring minimal hyperparameter tuning. Manually curated datasets for validating alignment algorithms are limited in the field of untargeted metabolomics, and hence we develop a dataset split procedure to generate pairs of validation datasets to test the alignments produced by GromovMatcher and other methods. Applying our method to experimental patient studies of liver and pancreatic cancer, we discover shared metabolic features related to patient alcohol intake, demonstrating how GromovMatcher facilitates the search for biomarkers associated with lifestyle risk factors linked to several cancer types.
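A hedged sketch of the underlying idea, not the GromovMatcher implementation: Gromov-Wasserstein optimal transport can match features of two studies by comparing their feature-feature correlation structures. The synthetic data, library calls and thresholds below are our own illustration, using the POT library.

```python
# Illustrative Gromov-Wasserstein feature matching on correlation structures.
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 30))                  # 30 shared features, 100 samples
study1 = latent + 0.1 * rng.normal(size=latent.shape)
perm = rng.permutation(30)                           # unknown feature reordering
study2 = latent[:, perm] + 0.1 * rng.normal(size=latent.shape)

C1 = np.abs(np.corrcoef(study1, rowvar=False))       # feature-feature correlations
C2 = np.abs(np.corrcoef(study2, rowvar=False))
p, q = ot.unif(C1.shape[0]), ot.unif(C2.shape[0])
coupling = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun="square_loss")

recovered = coupling.argmax(axis=1)                   # matched feature indices
print("correctly matched:", np.mean(recovered == np.argsort(perm)))
```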
Retrieval-augmented machine translation leverages examples from a translation memory by retrieving similar instances. These examples are used to condition the predictions of a neural decoder. We aim to improve the upstream retrieval step and consider a fixed downstream edit-based model: the multi-Levenshtein Transformer. The task consists of finding a set of examples that maximizes the overall coverage of the source sentence. To this end, we rely on the theory of submodular functions and explore new algorithms to optimize this coverage. We evaluate the resulting performance gains for the machine translation task.
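As a minimal sketch of the optimization problem (our own simplified setup, not the paper's exact algorithm): coverage of source tokens by a set of retrieved examples is monotone submodular, so the classical greedy rule, which repeatedly adds the example with the largest marginal coverage gain, enjoys a $(1-1/e)$ approximation guarantee.

```python
# Greedy maximisation of a submodular coverage function over translation-memory examples.
def greedy_coverage(source_tokens, candidates, k):
    source = set(source_tokens)
    covered, selected = set(), []
    for _ in range(k):
        best, best_gain = None, 0
        for idx, cand in enumerate(candidates):
            gain = len((set(cand) & source) - covered)   # marginal coverage gain
            if gain > best_gain:
                best, best_gain = idx, gain
        if best is None:                                  # no example adds new coverage
            break
        selected.append(best)
        covered |= set(candidates[best]) & source
    return selected, covered

src = "the cat sat on the mat".split()
memory = ["the dog sat".split(), "on the mat".split(), "a cat".split()]
print(greedy_coverage(src, memory, k=2))
```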
Data assimilation algorithms integrate prior information from numerical model simulations with observed data. Ensemble-based filters, regarded as state-of-the-art, are widely employed for large-scale estimation tasks in disciplines such as geoscience and meteorology. Despite their inability to produce the true posterior distribution for nonlinear systems, their robustness and capacity for state tracking are noteworthy. In contrast, particle filters yield the correct distribution in the ensemble limit but require substantially larger ensemble sizes than ensemble-based filters to maintain stability in higher-dimensional spaces. It is essential to move beyond traditional Gaussian assumptions to achieve realistic quantification of uncertainties. One approach is to hybridise different filters through tempering, so as to harness their complementary strengths. A new adaptive tempering method is proposed to tune the underlying schedule, aiming to systematically surpass the performance achieved previously. Although promising numerical results for certain filter combinations on toy examples exist in the literature, the tuning of hyperparameters presents a considerable challenge. A deeper understanding of these interactions is crucial for practical applications.
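For context, a standard (not the proposed) adaptive tempering rule chooses the next exponent by bisection so that the effective sample size (ESS) of the incrementally tempered weights hits a target fraction of the ensemble size. The sketch below uses our own toy log-likelihoods and target.

```python
# Adaptive choice of the next tempering exponent via an ESS criterion (illustrative).
import numpy as np

def next_temperature(log_lik, phi_current, ess_target):
    def ess(phi):
        logw = (phi - phi_current) * log_lik
        w = np.exp(logw - logw.max())
        w /= w.sum()
        return 1.0 / np.sum(w ** 2)
    if ess(1.0) >= ess_target:           # can jump straight to the untempered target
        return 1.0
    lo, hi = phi_current, 1.0
    for _ in range(50):                   # bisection on the ESS criterion
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ess(mid) >= ess_target else (lo, mid)
    return lo

log_lik = np.random.default_rng(0).normal(scale=5.0, size=500)   # toy log-likelihoods
print(next_temperature(log_lik, phi_current=0.0, ess_target=250))
```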
Existing approaches to device placement ignore the topological features of computation graphs and rely mostly on heuristic methods for graph partitioning. Moreover, they follow either a grouper-placer or an encoder-placer architecture, which requires understanding the interaction structure between code operations. To bridge the gap between encoder-placer and grouper-placer techniques, we propose a novel framework for the task of device placement that relies on smaller computation graphs extracted from the OpenVINO toolkit and is trained with reinforcement learning. The framework consists of five steps, including graph coarsening, node representation learning and policy optimization. It facilitates end-to-end training and takes into account the directed and acyclic nature of the computation graphs. We also propose a model variant, inspired by graph parsing networks and complex network analysis, that enables joint graph representation learning and personalized graph partitioning with an unspecified number of groups. To train the entire framework, we use reinforcement learning, with the execution time of the suggested device placements forming the reward. We demonstrate the flexibility and effectiveness of our approach through multiple experiments with three benchmark models, namely Inception-V3, ResNet, and BERT. The robustness of the proposed framework is also highlighted through an ablation study. The suggested placements improve inference speed for the benchmark models by up to $58.2\%$ over CPU execution and by up to $60.24\%$ compared to other commonly used baselines.
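A highly simplified sketch of the reward formulation only (assumptions throughout, not the proposed framework): a REINFORCE-style update for a per-operation placement policy whose reward is the negative execution time of the sampled placement. The real framework additionally performs graph coarsening and learns node representations; here execution time is merely simulated.

```python
# Toy REINFORCE loop: reward = negative (simulated) execution time of a placement.
import numpy as np

rng = np.random.default_rng(0)
n_ops, n_devices = 8, 2
logits = np.zeros((n_ops, n_devices))    # independent per-operation categorical policy

def simulated_exec_time(placement):
    # Stand-in for measuring the real execution time of a placement.
    compute = np.where(placement == 1, 1.0, 2.0).sum()    # device 1 is faster
    comm = np.abs(np.diff(placement)).sum() * 0.5          # cost of crossing devices
    return compute + comm

baseline = 0.0
for step in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    placement = np.array([rng.choice(n_devices, p=p) for p in probs])
    reward = -simulated_exec_time(placement)               # faster placement, higher reward
    baseline = 0.95 * baseline + 0.05 * reward             # moving-average baseline
    grad = -probs.copy()
    grad[np.arange(n_ops), placement] += 1.0               # gradient of log-prob of actions
    logits += 0.1 * (reward - baseline) * grad             # REINFORCE update

best = logits.argmax(axis=1)
print(best, simulated_exec_time(best))
```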
This paper deals with the algorithmic aspects of solving feasibility problems of semidefinite programming (SDP), also known as linear matrix inequalities (LMIs). Since in some SDP instances all feasible solutions have irrational entries, numerical solvers that work with rational numbers can only find an approximate solution. We study the following question: is it possible to certify feasibility of a given SDP using an approximate solution that is sufficiently close to some exact solution? Existing approaches assume that rational feasible solutions exist (and use techniques such as rounding and lattice reduction algorithms). We propose an alternative approach that does not need this assumption. More specifically, we show how to construct a system of polynomial equations whose set of real solutions is guaranteed to have an isolated correct solution (assuming that the target exact solution is of maximum rank). This allows, in particular, the use of algorithms from real algebraic geometry for solving systems of polynomial equations, yielding a hybrid (or symbolic-numerical) method for SDPs. We experimentally compare it with the purely symbolic method of [Henrion, Naldi, Safey El Din; SIAM J. Optim., 2016]; the hybrid method was able to certify feasibility of many SDP instances on which [Henrion, Naldi, Safey El Din; SIAM J. Optim., 2016] failed. We argue that our approach may have other uses, such as refining an approximate solution using methods of numerical algebraic geometry for systems of polynomial equations.
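A related toy illustration (not the certification procedure of the paper) of why approximate solutions are unavoidable: a tiny SDP whose exact optimal entry is the irrational number $\sqrt{2}$, so a floating-point solver can only return a rational approximation of it. The cvxpy modelling below and the choice of instance are ours.

```python
# A 2x2 SDP whose exact optimum is sqrt(2); numerical solvers return an approximation.
import cvxpy as cp
import numpy as np

X = cp.Variable((2, 2), symmetric=True)
constraints = [X >> 0, X[0, 0] == 1, X[1, 1] == 2]
problem = cp.Problem(cp.Maximize(X[0, 1]), constraints)
problem.solve()

print(X.value)                        # approximately [[1, 1.41421...], [1.41421..., 2]]
print(np.sqrt(2) - X[0, 1].value)     # small but generically nonzero numerical error
```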
With reference to a binary outcome and a binary mediator, we derive identification bounds for natural effects under a reduced set of assumptions. Specifically, no assumptions about confounding are made that involve the outcome; we only assume no unobserved exposure-mediator confounding as well as a condition termed partially constant cross-world dependence (PC-CWD), which poses fewer constraints on the counterfactual probabilities than the usual cross-world independence assumption. The proposed strategy can also be used to achieve interval identification of the total effect, which is no longer point-identified under the considered set of assumptions. Our derivations are based on postulating a logistic regression model for the mediator as well as for the outcome. However, in both cases the functional form governing the dependence on the explanatory variables is allowed to be arbitrary, thereby resulting in a semi-parametric approach. To account for sampling variability, we provide delta-method approximations of standard errors in order to build uncertainty intervals from identification bounds. The proposed method is applied to a dataset gathered from a Spanish prospective cohort study. The aim is to evaluate whether the effect of smoking on lung cancer risk is mediated by the onset of pulmonary emphysema.
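A hedged sketch of the kind of semi-parametric specification described above (the notation is ours and not necessarily the paper's), with exposure $X$, mediator $M$, outcome $Y$ and baseline covariates $C$:
\[
\operatorname{logit}\, P(M=1 \mid X=x, C=c) = g_M(x, c), \qquad
\operatorname{logit}\, P(Y=1 \mid X=x, M=m, C=c) = g_Y(x, m, c),
\]
where the functions $g_M$ and $g_Y$ are left unspecified, so that only the logistic link is imposed.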
In this paper, we propose a weak Galerkin (WG) finite element method for the Maxwell eigenvalue problem. By restricting subspaces, we transform the mixed form of the Maxwell eigenvalue problem into a simple elliptic equation. We then present the WG numerical scheme for the Maxwell eigenvalue problem. Furthermore, we obtain optimal error estimates of arbitrarily high convergence order and prove the lower-bound property of the numerical eigenvalues. Numerical experiments confirm the theoretical analysis and the lower-bound property.
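For concreteness, the model problem in its standard strong form (notation ours, not copied from the paper) reads: find $(\boldsymbol{u},\lambda)$ with $\boldsymbol{u}\neq\boldsymbol{0}$ such that
\[
\nabla \times (\nabla \times \boldsymbol{u}) = \lambda \boldsymbol{u}, \qquad
\nabla \cdot \boldsymbol{u} = 0 \ \text{in } \Omega, \qquad
\boldsymbol{u} \times \boldsymbol{n} = \boldsymbol{0} \ \text{on } \partial\Omega .
\]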
We present a novel formal system for proving quantitative-leakage properties of programs. Based on a theory of Quantitative Information Flow (QIF) that models information leakage as a noisy communication channel, it uses "gain-functions" for the description and measurement of expected leaks. We use a small imperative programming language, augmented with leakage features, and with it express adversaries' activities in the style of, but more generally than, the Hoare triples or expectation transformers that traditionally express deterministic or probabilistic correctness but without information flow. The programs are annotated with "gain-expressions" that capture simple adversarial settings such as "Guess the secret in one try", but also much more general ones; and our formal syntax and logic-based framework enables us to transform such gain-expressions that apply after a program has finished into ones that equivalently apply before the program has begun. In that way we enable a formal proof-based reasoning system for QIF at the source level. We apply it to the programming language we have chosen, and demonstrate its effectiveness in a number of small but sometimes intricate situations.
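A small worked example of the "guess the secret in one try" gain function in standard QIF notation (our illustration, not from the paper): let the secret $X$ be uniform on $\{0,1,2,3\}$ and let the program reveal its parity, viewed as a channel $C$. Then
\[
V(\pi) = \max_x \pi(x) = \tfrac14, \qquad
V[\pi \rhd C] = \sum_{y\in\{\text{even},\text{odd}\}} \max_x \pi(x)\,C_{x,y} = \tfrac12, \qquad
\mathcal{L}^{\times} = \frac{V[\pi \rhd C]}{V(\pi)} = 2,
\]
i.e. the adversary's chance of guessing the secret in one try doubles, a multiplicative leakage of one bit.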
Mobile devices and Internet of Things (IoT) devices nowadays generate a large amount of heterogeneous spatial-temporal data. Modelling the spatial-temporal dynamics under privacy constraints remains a challenging problem. Federated learning (FL) has been proposed as a framework that enables model training across distributed devices without sharing the original data, which alleviates privacy concerns. Personalized federated learning (PFL) methods further address the data heterogeneity problem. However, these methods do not consider the natural spatial relations among nodes. To model spatial relations, Graph Neural Network (GNN) based FL approaches have been proposed, but they do not take the dynamic spatial-temporal relations among edge nodes into account. Several approaches model spatial-temporal dynamics in a centralized environment, while less effort has been made in the federated setting. To overcome these challenges, we propose a novel Federated Adaptive Spatial-Temporal Attention (FedASTA) framework to model the dynamic spatial-temporal relations. On the client node, FedASTA extracts temporal relations and trend patterns from the decomposed terms of the original time series. Then, on the server node, FedASTA utilizes the trend patterns from clients to construct an adaptive spatial-temporal aware graph which captures dynamic correlations between clients. Besides, we design a masked spatial attention module with both the static graph and the constructed adaptive graph to model spatial dependencies among clients. Extensive experiments on five real-world public traffic flow datasets demonstrate that our method achieves state-of-the-art performance in the federated scenario. In addition, experiments conducted in the centralized setting show the effectiveness of our novel adaptive graph construction approach compared with other popular dynamic spatial-temporal aware methods.
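A simplified sketch of masked spatial attention in general (our own illustration, not the FedASTA code): attention scores between client nodes are restricted by an adjacency mask, which may come from a static graph or an adaptively constructed one, before the softmax. Shapes and the toy adjacency below are illustrative.

```python
# Masked spatial attention over client-node features.
import numpy as np

def masked_spatial_attention(H, adj):
    """H: (num_nodes, d) node features; adj: (num_nodes, num_nodes) 0/1 mask."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)                      # pairwise attention scores
    scores = np.where(adj > 0, scores, -1e9)           # mask out non-neighbouring clients
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ H                                  # aggregated spatial messages

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]])
print(masked_spatial_attention(H, adj).shape)   # (4, 8)
```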