Efficiently counting or detecting defective items is a crucial task in various fields, ranging from biological testing to quality control to streaming algorithms. The \emph{group testing estimation problem} concerns estimating the number $d$ of defective elements in a collection of $n$ elements to within a given factor. We primarily consider the classical query model, in which a query reveals whether the selected group of elements contains a defective one. We show that any non-adaptive randomized algorithm that estimates the value of $d$ within a constant factor requires $\Omega(\log n)$ queries. This confirms that a known $O(\log n)$ upper bound by Bshouty (2019) is tight and resolves a conjecture by Damaschke and Sheikh Muhammad (2010). Additionally, we prove similar matching upper and lower bounds in the threshold query model.
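As an illustration of the matching upper bound, the standard non-adaptive estimator subsamples the ground set at geometrically decreasing rates and looks for the first scale whose pools are mostly defective-free. The sketch below simulates this generic idea; it is not the exact algorithm of Bshouty (2019), and the oracle construction and function names are ours.

```python
import math
import random

def make_oracle(defectives):
    """Standard group-testing query: does the pool contain a defective item?"""
    defective_set = set(defectives)
    return lambda pool: any(x in defective_set for x in pool)

def estimate_defectives(n, query, reps=9):
    """Non-adaptive constant-factor estimate of d using O(reps * log n) queries.

    At scale i, every element joins a pool independently with probability
    2**-i.  A pool is defective-free with probability about exp(-d / 2**i),
    so the first scale whose pools are mostly negative satisfies 2**i ~ d.
    All pools are drawn before any query is asked, hence non-adaptive.
    """
    scales = range(1, math.ceil(math.log2(n)) + 1)
    pools = {i: [[x for x in range(n) if random.random() < 2.0 ** -i]
                 for _ in range(reps)] for i in scales}
    for i in scales:
        negatives = sum(not query(pool) for pool in pools[i])
        if negatives > reps // 2:   # mostly defective-free at this scale
            return 2 ** i
    return n                        # every scale looked defective-heavy

# Example: 10_000 items with 37 hidden defectives.
oracle = make_oracle(random.sample(range(10_000), 37))
print(estimate_defectives(10_000, oracle))  # typically within a small constant factor of 37
```

Since every pool is drawn before any answer is observed, the scheme is non-adaptive, and repeating each scale a constant number of times boosts the success probability; the lower bound says the $\Theta(\log n)$ scale count cannot be avoided.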
We introduce the higher-order refactoring problem, where the goal is to compress a logic program by discovering higher-order abstractions, such as map, filter, and fold. We implement our approach in Stevie, which formulates the refactoring problem as a constraint optimisation problem. Our experiments on multiple domains, including program synthesis and visual reasoning, show that refactoring can improve the learning performance of an inductive logic programming system, increasing predictive accuracies by 27% and reducing learning times by 47%. We also show that Stevie can discover abstractions that transfer to multiple domains.
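Stevie operates on logic programs, but the compression idea is easiest to see in functional terms: two first-order programs that share a recursive skeleton can be rewritten as instances of a single higher-order abstraction, so the shared recursion is written once. The Python sketch below is only a conceptual illustration of that refactoring, not Stevie's constraint-based formulation.

```python
from functools import reduce

# Two first-order programs with the same recursive skeleton...
def sum_list(xs):
    return 0 if not xs else xs[0] + sum_list(xs[1:])

def max_list(xs):
    return xs[0] if len(xs) == 1 else max(xs[0], max_list(xs[1:]))

# ...can be refactored as instances of a single higher-order abstraction
# (fold), after which each original program shrinks to a one-line definition.
def fold(f, acc, xs):
    return reduce(f, xs, acc)

sum_list_refactored = lambda xs: fold(lambda a, b: a + b, 0, xs)
max_list_refactored = lambda xs: fold(max, xs[0], xs[1:])

assert sum_list([3, 1, 4]) == sum_list_refactored([3, 1, 4]) == 8
assert max_list([3, 1, 4]) == max_list_refactored([3, 1, 4]) == 4
```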
Physiological fatigue, a state of reduced cognitive and physical performance resulting from prolonged mental or physical exertion, poses significant challenges in various domains, including healthcare, aviation, transportation, and industrial sectors. As understanding of fatigue's impact on human performance deepens, interest in effective fatigue monitoring techniques has grown. Among these techniques, electroencephalography (EEG) has emerged as a promising tool for objectively assessing physiological fatigue due to its non-invasiveness, high temporal resolution, and sensitivity to neural activity. This paper aims to provide a comprehensive analysis of the current state of EEG-based monitoring of physiological fatigue.
Mixtures of regression are a powerful class of models for regression learning when the response variable of interest is highly uncertain and heterogeneous. In addition to being a rich predictive model for the response given the covariates, the parameters in this model class provide useful information about the heterogeneity in the data population, which is represented by the conditional distributions of the response given the covariates associated with a number of distinct but latent subpopulations. In this paper, we investigate conditions for strong identifiability, rates of convergence for conditional density and parameter estimation, and the Bayesian posterior contraction behavior arising in finite mixtures of regression models, under exact-fitted and over-fitted settings and when the number of components is unknown. This theory is applicable to common choices of link functions and families of conditional distributions employed by practitioners. We provide simulation studies and data illustrations, which shed light on the parameter learning behavior of several popular regression mixture models reported in the literature.
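For concreteness, a finite mixture of regressions with $k$ components models the conditional density of the response as follows; the generic link $h$ and component family $f$ below are placeholders for the specific choices a practitioner would make:
\[
p(y \mid x) \;=\; \sum_{j=1}^{k} \pi_j \, f\big(y \mid h(x^\top \beta_j), \sigma_j\big),
\qquad \pi_j \ge 0, \quad \sum_{j=1}^{k} \pi_j = 1,
\]
where each component $j$ corresponds to one latent subpopulation with its own regression parameters $(\beta_j, \sigma_j)$ and mixing weight $\pi_j$.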
Missing data is a common problem in practical settings. Various imputation methods have been developed to deal with missing data. However, even though the label is usually available in the training data, common imputation practice relies only on the input and ignores the label. In this work, we illustrate how stacking the label with the input can significantly improve the imputation of the input. In addition, we propose a classification strategy that initializes the predicted test labels as missing values and stacks the labels with the input for imputation, which allows the label and the input to be imputed at the same time. The technique can also handle training data with missing labels without any prior imputation, and it is applicable to continuous, categorical, or mixed-type data. Experiments show promising results in terms of accuracy.
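A minimal sketch of the stacking idea, using scikit-learn's IterativeImputer as a stand-in for the imputation method; the column layout, class-rounding step, and hyperparameters are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_and_classify(X_train, y_train, X_test):
    """Stack the label as an extra column, then impute label and input jointly.

    X_train and X_test may contain NaNs; y_train may also contain NaNs
    (training rows with missing labels need no prior imputation).
    Test labels are initialised as missing and recovered by the imputer.
    """
    n_test = X_test.shape[0]
    stacked = np.column_stack([
        np.vstack([X_train, X_test]),                        # inputs
        np.concatenate([y_train, np.full(n_test, np.nan)]),  # labels, test part missing
    ])
    completed = IterativeImputer(max_iter=20, random_state=0).fit_transform(stacked)
    y_test_pred = np.rint(completed[-n_test:, -1])           # round imputed labels to classes
    X_completed = completed[:, :-1]                          # imputed inputs, if needed downstream
    return X_completed, y_test_pred

# Toy usage with a binary label and a few missing input entries.
rng = np.random.default_rng(0)
X_tr = rng.normal(size=(100, 4)); y_tr = (X_tr[:, 0] > 0).astype(float)
X_te = rng.normal(size=(20, 4))
X_tr[rng.random(X_tr.shape) < 0.1] = np.nan                  # inject missing inputs
_, y_hat = impute_and_classify(X_tr, y_tr, X_te)
```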
High-order implicit shock tracking (fitting) is a class of high-order, optimization-based numerical methods that approximate solutions of conservation laws with non-smooth features by aligning elements of the computational mesh with those features. This ensures that the non-smooth features are perfectly represented by inter-element jumps and that high-order basis functions approximate smooth regions of the solution without nonlinear stabilization, which leads to accurate approximations on traditionally coarse meshes. In this work, we extend implicit shock tracking to time-dependent problems using a slab-based space-time approach. This is achieved by reformulating a time-dependent conservation law as a steady conservation law in one higher dimension and applying existing implicit shock tracking techniques. To avoid computations over the entire time domain and unstructured mesh generation in higher dimensions, we introduce a general procedure to generate conforming, simplex-only meshes of space-time slabs in a way that preserves features (e.g., curved elements, refinement regions) from previous time slabs. The use of space-time slabs also simplifies the shock tracking problem by reducing temporal complexity. Several practical adaptations of the implicit shock tracking solvers are developed for the space-time setting, including 1) a self-adjusting temporal boundary, 2) nondimensionalization of a space-time slab, 3) adaptive mesh refinement, and 4) shock boundary conditions, which lead to accurate solutions on coarse space-time grids, even for problems with complex flow features such as curved shocks, shock formation, shock-shock and shock-boundary interaction, and triple points.
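The reformulation at the heart of the space-time approach can be summarized as follows (notation ours): augmenting the flux with the conserved variable in the temporal direction turns the unsteady problem into a steady one in one higher dimension,
\[
\partial_t u + \nabla_x \cdot f(u) = 0
\quad\Longleftrightarrow\quad
\bar{\nabla} \cdot \bar{f}(u) = 0,
\qquad
\bar{f}(u) := \big(f(u),\, u\big), \quad \bar{\nabla} := (\nabla_x,\, \partial_t),
\]
so the steady implicit shock tracking machinery can be applied slab by slab on $\Omega \times (t_m, t_{m+1})$.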
Any interactive protocol between a pair of parties can be reliably simulated in the presence of noise with a multiplicative overhead on the number of rounds (Schulman, 1996). The reciprocal of the best (least) overhead is called the interactive capacity of the noisy channel. In this work, we present a lower bound on the interactive capacity of the binary erasure channel that improves the best known bound, due to Ben-Yishai et al. (2021), by roughly a factor of 1.75. The improvement comes from a tighter analysis of the correctness of the simulation protocol using error pattern analysis: instead of using the well-known technique of bounding the least number of erasures needed to make the simulation fail, we identify and bound the probability of the specific erasure patterns that cause simulation failure. We remark that error pattern analysis can be useful for other problems involving stochastic noise, such as bounding the interactive capacity of other channels.
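In symbols (notation ours), if $\sigma^\ast$ denotes the least overhead with which every sufficiently long protocol can be reliably simulated over the channel, the interactive capacity is
\[
C_I \;=\; \frac{1}{\sigma^\ast}, \qquad
\sigma^\ast \;=\; \inf\big\{\sigma : \text{every } n\text{-round protocol is reliably simulable in at most } \sigma n \text{ channel rounds, for all large } n\big\},
\]
and the contribution here is a larger provable lower bound on $C_I$ for the binary erasure channel.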
While methods for measuring and correcting differential performance in risk prediction models have proliferated in recent years, most existing techniques can only be used to assess fairness across relatively large subgroups. Yet the purpose of algorithmic fairness efforts is often to redress discrimination against groups that are both marginalized and small, so this sample-size limitation frequently prevents existing techniques from accomplishing their main aim. We take a three-pronged approach to the problem of quantifying fairness with small subgroups. First, we propose new estimands built on the "counterfactual fairness" framework that leverage information across groups. Second, we estimate these quantities using a larger volume of data than existing techniques do. Finally, we propose a novel data borrowing approach to incorporate "external data" that lacks outcomes and predictions but contains covariate and group membership information. This less stringent requirement on the external data opens up more possibilities for external data sources. We demonstrate the practical application of our estimators on a risk prediction model used by a major Midwestern health system during the COVID-19 pandemic.
The notion of an e-value has recently been proposed as a possible alternative to critical regions and p-values in statistical hypothesis testing. In this paper we consider testing the nonparametric hypothesis of symmetry, introduce e-value analogues of three popular nonparametric tests, define an e-value analogue of Pitman's asymptotic relative efficiency, and apply it to the three tests. We discuss limitations of our simple definition of asymptotic relative efficiency and list directions for further research.
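For reference, the standard definition: an e-variable for a null hypothesis $H_0$ is a nonnegative random variable $E$ satisfying
\[
\mathbb{E}_P[E] \le 1 \quad \text{for every } P \in H_0,
\]
so that, by Markov's inequality, rejecting $H_0$ when $E \ge 1/\alpha$ yields a test of level $\alpha$; this is the sense in which e-values can play the role of p-values and critical regions.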
This paper addresses dimensionality reduction in a regression setting where the predictors are a mixture of a functional variable and a high-dimensional vector. A flexible model combining sparse linear ideas with semiparametrics is proposed. A wide range of asymptotic results is provided, covering both the rates of convergence of the estimators and the asymptotic behaviour of the variable selection procedure. Practical issues are analysed through finite-sample simulation experiments, while an application to the Tecator data illustrates the usefulness of our methodology.
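One natural specification of such a model, written here purely as an illustration and not necessarily matching the paper's exact formulation, combines a semiparametric single-index term in the functional predictor with a sparse linear term in the high-dimensional vector:
\[
Y \;=\; g\big(\langle \theta, X \rangle\big) + Z^\top \beta + \varepsilon,
\]
where $X$ is the functional predictor, $\langle\cdot,\cdot\rangle$ an inner product on the function space, $g$ an unknown link, $Z$ the high-dimensional covariate vector, and $\beta$ a sparse coefficient vector targeted by the variable selection procedure.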
We develop a novel multiple hypothesis testing correction with family-wise error rate (FWER) control that efficiently exploits positive dependencies between potentially correlated statistical hypothesis tests. Our proposed algorithm, $\texttt{max-rank}$, is conceptually straightforward, relying on a $\max$-operator in the rank domain of the computed test statistics. We compare our approach to the frequently employed Bonferroni correction, demonstrating theoretically and empirically its superiority over Bonferroni when positive dependencies are present and its equivalence otherwise. Our advantage over Bonferroni increases as the number of tests rises, and we maintain high statistical power whilst ensuring FWER control. We frame our algorithm specifically in the context of parallel permutation testing, a scenario that arises in our primary application, conformal prediction, a recently popularized approach for quantifying uncertainty in complex predictive settings.
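A minimal sketch of a max-operator correction over parallel permutation tests in the rank domain; the rejection rule, variable names, and toy data below are illustrative and need not coincide with the authors' exact $\texttt{max-rank}$ procedure.

```python
import numpy as np

def max_rank_correction(obs_stats, perm_stats, alpha=0.05):
    """FWER-style correction via a max-operator in the rank domain.

    obs_stats : (n_tests,) observed statistics (larger = more extreme).
    perm_stats: (n_perms, n_tests) statistics recomputed under the SAME
                permutations for every test, so dependencies are preserved.
    """
    n_perms, n_tests = perm_stats.shape
    # Rank each observed statistic within its own permutation distribution,
    # putting all tests on a common, scale-free footing.
    obs_rank = (perm_stats <= obs_stats).sum(axis=0)             # (n_tests,)
    # Rank every permuted statistic within its column the same way.
    perm_rank = perm_stats.argsort(axis=0).argsort(axis=0) + 1   # (n_perms, n_tests)
    # Null distribution of the maximum rank across tests.
    max_null = perm_rank.max(axis=1)                             # (n_perms,)
    # Adjusted p-values: how often does the null maximum reach each observed rank?
    p_adj = (max_null[:, None] >= obs_rank[None, :]).mean(axis=0)
    return p_adj, p_adj <= alpha

# Toy example: 50 positively correlated tests, the first 5 with a real effect.
rng = np.random.default_rng(1)
shared = rng.normal(size=(1000, 1))
perm = 0.7 * shared + 0.3 * rng.normal(size=(1000, 50))  # stand-in permutation statistics
obs = perm.std(axis=0) * 0.5                             # weak "observed" statistics
obs[:5] += 3.0                                           # strong effects on the first 5 tests
p_adjusted, rejected = max_rank_correction(obs, perm)
```

Because the same permutations are reused across tests, the null distribution of the maximum inherits any positive dependence between tests, which is what allows a max-based correction to beat Bonferroni when such dependence exists and to match it otherwise.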