We study the problem of online unweighted bipartite matching with $n$ offline vertices and $n$ online vertices where one wishes to be competitive against the optimal offline algorithm. While the classic RANKING algorithm of Karp et al. [1990] provably attains a competitive ratio of $1-1/e > 1/2$, we show that no learning-augmented method can be both 1-consistent and strictly better than $1/2$-robust under the adversarial arrival model. Meanwhile, under the random arrival model, we show how one can utilize methods from distribution testing to design an algorithm that takes in external advice about the online vertices and provably achieves a competitive ratio interpolating between any ratio attainable by advice-free methods and the optimal ratio of 1, depending on the advice quality.
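The classic RANKING algorithm referenced above is simple to state: fix a uniformly random permutation of the offline vertices up front, then match each arriving online vertex to its highest-ranked free neighbor. A minimal sketch (data layout and names are illustrative, not from the paper):

```python
import random

def ranking(offline, online_neighbors):
    """One run of the RANKING algorithm of Karp, Vazirani, and Vazirani
    (1990) for online unweighted bipartite matching.

    offline: list of offline vertex ids.
    online_neighbors: list, in arrival order, of the neighbor set of
        each online vertex.
    Returns a dict mapping matched offline vertex -> online arrival index.
    """
    # Draw a uniformly random rank (permutation) of the offline side.
    rank = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
    matched = {}
    for t, neighbors in enumerate(online_neighbors):
        free = [v for v in neighbors if v not in matched]
        if free:
            # Match the arrival to its highest-ranked (lowest rank value) free neighbor.
            matched[min(free, key=lambda v: rank[v])] = t
    return matched
```

On a complete bipartite instance every arrival is matched, so the returned matching has size $n$; the $1-1/e$ guarantee is an expectation over the random permutation on adversarial instances.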
We introduce the broad subclass of algebraic compressed sensing problems, where structured signals are modeled either explicitly or implicitly via polynomials. This includes, for instance, low-rank matrix and tensor recovery. We employ powerful techniques from algebraic geometry to study well-posedness of sufficiently general compressed sensing problems, including existence, local recoverability, global uniqueness, and local smoothness. Our main results are summarized in thirteen questions and answers in algebraic compressed sensing. Most of our answers concerning the minimum number of required measurements for existence, recoverability, and uniqueness of algebraic compressed sensing problems are optimal and depend only on the dimension of the model.
Static analysis of logic programs by abstract interpretation requires designing abstract operators which mimic the concrete ones, such as unification, renaming and projection. In the case of goal-driven analysis, where goal-dependent semantics are used, we also need a backward-unification operator, typically implemented through matching. In this paper we study the problem of deriving optimal abstract matching operators for sharing and linearity properties. We provide an optimal operator for matching in the domain ${\mathtt{ShLin}^{\omega}}$, which can be easily instantiated to derive optimal operators for the domains ${\mathtt{ShLin}^{2}}$ by Andy King and the reduced product $\mathtt{Sharing} \times \mathtt{Lin}$.
Estimating parameters of a diffusion process given continuous-time observations of the process via maximum likelihood approaches or, online, via stochastic gradient descent or Kalman filter formulations constitutes a well-established research area. It has also been established previously that these techniques are, in general, not robust to perturbations in the data in the form of temporal correlations. While the subject is relatively well understood and appropriate modifications have been suggested in the context of multi-scale diffusion processes and their reduced model equations, we consider here an alternative setting where a second-order diffusion process in positions and velocities is only observed via its positions. In this note, we propose a simple modification to standard stochastic gradient descent and Kalman filter formulations, which eliminates the arising systematic estimation biases. The modification can be extended to standard maximum likelihood approaches and avoids computation of previously proposed correction terms.
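As a point of reference for the standard (unmodified) approach discussed above, the discretized maximum-likelihood estimator of a drift parameter is a one-liner. The sketch below shows the baseline estimator for the drift coefficient $\theta$ of an Ornstein–Uhlenbeck process $\mathrm{d}X = -\theta X\,\mathrm{d}t + \mathrm{d}W$ observed on a grid; this is an illustrative baseline only, and the bias-removing modification proposed in the note is not shown:

```python
import numpy as np

def ou_drift_mle(X, dt):
    """Standard discretized MLE of theta in dX = -theta X dt + dW,
    given a path X observed at uniform spacing dt.  This baseline is
    the kind of estimator that becomes biased under the perturbed
    observation settings discussed in the text.
    """
    dX = np.diff(X)
    # Discretization of theta_hat = -(int X dX) / (int X^2 dt).
    return -np.sum(X[:-1] * dX) / (dt * np.sum(X[:-1] ** 2))
```

On a noise-free exponentially decaying path the estimator recovers $\theta$ up to an $O(\mathrm{d}t)$ discretization error.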
This paper establishes the generalization error of pooled min-$\ell_2$-norm interpolation in transfer learning where data from diverse distributions are available. Min-norm interpolators emerge naturally as implicit regularized limits of modern machine learning algorithms. Previous work characterized their out-of-distribution risk when samples from the test distribution are unavailable during training. However, in many applications, a limited amount of test data may be available during training, yet properties of min-norm interpolation in this setting are not well-understood. We address this gap by characterizing the bias and variance of pooled min-$\ell_2$-norm interpolation under covariate and model shifts. The pooled interpolator captures both early fusion and a form of intermediate fusion. Our results have several implications: under model shift, for low signal-to-noise ratio (SNR), adding data always hurts. For higher SNR, transfer learning helps as long as the shift-to-signal (SSR) ratio lies below a threshold that we characterize explicitly. By consistently estimating these ratios, we provide a data-driven method to determine: (i) when the pooled interpolator outperforms the target-based interpolator, and (ii) the optimal number of target samples that minimizes the generalization error. Under covariate shift, if the source sample size is small relative to the dimension, heterogeneity between domains improves the risk, and vice versa. We establish a novel anisotropic local law to achieve these characterizations, which may be of independent interest in random matrix theory. We supplement our theoretical characterizations with comprehensive simulations that demonstrate the finite-sample efficacy of our results.
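The pooled min-$\ell_2$-norm interpolator in the early-fusion form is easy to compute: stack source and target samples and take the minimum-norm solution of the pooled linear system. A minimal sketch (function and argument names are illustrative):

```python
import numpy as np

def pooled_min_norm(X_source, y_source, X_target, y_target):
    """Pooled min-ell_2-norm interpolator: stack source and target
    samples (early fusion) and return the minimum-norm coefficient
    vector interpolating the pooled data."""
    X = np.vstack([X_source, X_target])
    y = np.concatenate([y_source, y_target])
    # In the overparameterized regime (n < d), the pseudoinverse yields
    # the minimum-ell_2-norm interpolator beta = X^T (X X^T)^{-1} y.
    return np.linalg.pinv(X) @ y
```

When the pooled design has more columns than rows, the returned coefficient vector interpolates all pooled samples exactly while having the smallest Euclidean norm among all interpolants.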
We propose a scalable variational Bayes method for statistical inference for a single or low-dimensional subset of the coordinates of a high-dimensional parameter in sparse linear regression. Our approach relies on assigning a mean-field approximation to the nuisance coordinates and carefully modelling the conditional distribution of the target given the nuisance. This requires only a preprocessing step and preserves the computational advantages of mean-field variational Bayes, while ensuring accurate and reliable inference for the target parameter, including for uncertainty quantification. We investigate the numerical performance of our algorithm, showing that it performs competitively with existing methods. We further establish accompanying theoretical guarantees for estimation and uncertainty quantification in the form of a Bernstein--von Mises theorem.
We extend the laminate based framework of direct Deep Material Networks (DMNs) to treat suspensions of rigid fibers in a non-Newtonian solvent. To do so, we derive two-phase homogenization blocks that are capable of treating incompressible fluid phases and infinite material contrast. In particular, we leverage existing results for linear elastic laminates to identify closed form expressions for the linear homogenization functions of two-phase layered emulsions. To treat infinite material contrast, we rely on the repeated layering of two-phase layered emulsions in the form of coated layered materials. We derive necessary and sufficient conditions which ensure that the effective properties of coated layered materials with incompressible phases are non-singular, even if one of the phases is rigid. With the derived homogenization blocks and non-singularity conditions at hand, we present a novel DMN architecture, which we name the Flexible DMN (FDMN) architecture. We build and train FDMNs to predict the effective stress response of shear-thinning fiber suspensions with a Cross-type matrix material. For 31 fiber orientation states, six load cases, and over a wide range of shear rates relevant to engineering processes, the FDMNs achieve validation errors below 4.31% when compared to direct numerical simulations with Fast-Fourier-Transform based computational techniques. Compared to a conventional machine learning approach introduced previously by the consortium of authors, FDMNs offer better accuracy at an increased computational cost for the considered material and flow scenarios.
In this paper we study predictive mean matching mass imputation estimators to integrate data from probability and non-probability samples. We consider two approaches: matching predicted to predicted ($\hat{y}-\hat{y}$~matching; PMM A) and predicted to observed ($\hat{y}-y$~matching; PMM B) values. We prove the consistency of two semi-parametric mass imputation estimators based on these approaches and derive their variance and variance estimators. We underline the differences between our approach and the nearest neighbour approach proposed by Yang et al. (2021) and prove consistency of the PMM A estimator under model mis-specification. Our approach can be employed with non-parametric regression techniques, such as kernel regression, and the analytical expression for variance can also be applied in nearest neighbour matching for non-probability samples. We conduct extensive simulation studies in order to compare the properties of this estimator with existing approaches, discuss the selection of $k$-nearest neighbours, and study the effects of model mis-specification. The paper finishes with an empirical study on the integration of a job vacancy survey and vacancies submitted to public employment offices (administrative and online data). Open source software is available for the proposed approaches.
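The PMM B ($\hat{y}-y$) variant described above can be sketched in a few lines: fit a working model on the non-probability sample, predict the mean for each probability-sample unit, and impute the observed $y$ of the nearest donor(s) in predicted-mean space. A minimal illustrative sketch with a linear working model (helper names are ours, not the paper's implementation):

```python
import numpy as np

def pmm_b_impute(X_prob, X_nonprob, y_nonprob, k=1):
    """Predictive mean matching, variant B (y-hat to y matching):
    impute values for the probability sample by matching its
    predicted means to *observed* outcomes in the non-probability
    sample and averaging the k nearest donors."""
    # Linear working model fitted on the non-probability sample.
    Xn = np.column_stack([np.ones(len(X_nonprob)), X_nonprob])
    beta, *_ = np.linalg.lstsq(Xn, y_nonprob, rcond=None)
    Xp = np.column_stack([np.ones(len(X_prob)), X_prob])
    y_hat = Xp @ beta                       # predicted means for the probability sample
    imputed = np.empty(len(X_prob))
    for i, m in enumerate(y_hat):
        idx = np.argsort(np.abs(y_nonprob - m))[:k]   # k nearest observed donors
        imputed[i] = y_nonprob[idx].mean()
    return imputed
```

PMM A differs only in the matching step: distances are taken to the donors' *predicted* values $\hat{y}$ rather than their observed $y$.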
The statistical modeling of discrete extremes has received less attention than that of their continuous counterparts in the Extreme Value Theory (EVT) literature. One approach to the transition from continuous to discrete extremes is the modeling of threshold exceedances of integer random variables by the discrete version of the generalized Pareto distribution. However, the optimal choice of thresholds defining exceedances remains a problematic issue. Moreover, in a regression framework, the treatment of the majority of non-extreme data below the selected threshold is either ignored or separated from the extremes. To tackle these issues, we expand on the concept of employing a smooth transition between the bulk and the upper tail of the distribution. In the case of zero inflation, we also develop models with an additional parameter. To incorporate possible predictors, we relate the parameters to additive smoothed predictors via an appropriate link, as in the generalized additive model (GAM) framework. A penalized maximum likelihood estimation procedure is implemented. We illustrate our modeling proposal with a real dataset of avalanche activity in the French Alps. With the advantage of bypassing the threshold selection step, our results indicate that the proposed models are more flexible and robust than competing models, such as the negative binomial distribution.
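The discrete generalized Pareto distribution mentioned above is obtained by discretizing the continuous GPD survival function: $P(Y = y) = S(y) - S(y+1)$ with $S(x) = (1 + \xi x/\sigma)^{-1/\xi}$. A minimal sketch of its probability mass function for $\xi \neq 0$ (parameterization as in the standard GPD; not the paper's full GAM-based model):

```python
import numpy as np

def dgpd_pmf(y, sigma, xi):
    """P(Y = y) for the discrete generalized Pareto distribution on
    y = 0, 1, 2, ..., defined via differences of the continuous GPD
    survival function S(x) = (1 + xi*x/sigma)^(-1/xi), xi != 0.
    The max(..., 0) clamp handles the bounded support when xi < 0."""
    S = lambda x: np.maximum(1.0 + xi * x / sigma, 0.0) ** (-1.0 / xi)
    return S(y) - S(y + 1)
```

Summing the pmf over the support recovers total mass 1, since the sum telescopes to $S(0) - S(N+1) \to 1$.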
For the fractional Laplacian of variable order, efficient and accurate numerical evaluation in multiple dimensions is challenging due to the singular nature of the defining integral. We propose a simple and easy-to-implement finite difference scheme for the multi-dimensional variable-order fractional Laplacian defined by a hypersingular integral. We prove that the scheme is of second-order convergence and apply the developed finite difference scheme to solve various equations with the variable-order fractional Laplacian. We present a fast solver with quasi-linear complexity for computing the variable-order fractional Laplacian and solving the corresponding PDEs. Several numerical examples demonstrate the accuracy and efficiency of our algorithm and verify our theory.
In this paper, we investigate the module-checking problem of pushdown multi-agent systems (PMS) against ATL and ATL* specifications. We establish that for ATL, module checking of PMS is 2EXPTIME-complete, which is the same complexity as pushdown module-checking for CTL. On the other hand, we show that ATL* module-checking of PMS turns out to be 4EXPTIME-complete, hence exponentially harder than both CTL* pushdown module-checking and ATL* model-checking of PMS. Our result for ATL* provides a rare example of a natural decision problem that is elementary but whose complexity is higher than triply exponential time.