Efficiently counting or detecting defective items is a crucial task in various fields ranging from biological testing to quality control to streaming algorithms. The group testing estimation problem concerns estimating the number of defective elements $d$ in a collection of $n$ total within a fixed factor. We primarily consider the classical query model, in which a query reveals whether the selected group of elements contains a defective one. We show that any non-adaptive randomized algorithm that estimates the value of $d$ within a constant factor requires $\Omega(\log n)$ queries. This confirms that a known $O(\log n)$ upper bound by Bshouty (2019) is tight and resolves a conjecture by Damaschke and Sheikh Muhammad (2010). Additionally, we prove a similar lower bound in the threshold query model.
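The flavor of such a non-adaptive estimator can be illustrated with a toy simulation (not Bshouty's algorithm itself): each of $O(\log n)$ levels queries random pools that include every item independently with probability $2^{-i}$, and the estimate is $2^{i}$ for the sparsest level whose pooled tests still mostly come back positive. All pool choices are fixed up front, so the scheme is non-adaptive; the repetition count and bounds are illustrative.

```python
import random

def pool_test(defective, p, rng):
    # A pooled test is positive iff the random pool (each item included
    # independently with probability p) contains a defective item; it is
    # enough to sample the inclusion of the defective items themselves.
    return any(rng.random() < p for _ in defective)

def estimate_defectives(n, defective, reps=21, seed=0):
    # Toy non-adaptive estimator: all reps * O(log n) pools are fixed
    # up front; the estimate is 2^i for the largest consecutive level i
    # whose pools (density 2^{-i}) still mostly test positive.
    rng = random.Random(seed)
    levels = n.bit_length() + 1
    majority = [
        2 * sum(pool_test(defective, 2.0 ** (-i), rng) for _ in range(reps)) > reps
        for i in range(levels)
    ]
    est = 1
    for i, positive in enumerate(majority):
        if not positive:
            break
        est = 2 ** i
    return est
```

For $d$ defectives, a pool of density $2^{-i}$ is positive with probability $1-(1-2^{-i})^d$, which transitions from near one to near zero around $2^i \approx d$, so the returned value is within a constant factor of $d$ with high probability.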
Stress prediction in porous materials and structures is challenging due to the high computational cost associated with direct numerical simulations. Convolutional Neural Network (CNN) based architectures have recently been proposed as surrogates to approximate and extrapolate the solution of such multiscale simulations. These methodologies are usually limited to 2D problems due to the high computational cost of 3D voxel-based CNNs. We propose a novel geometric learning approach based on a Graph Neural Network (GNN) that efficiently deals with three-dimensional problems by performing convolutions over 2D surfaces only. Following our previous developments using pixel-based CNNs, we train the GNN to automatically add local fine-scale stress corrections to an inexpensively computed coarse stress prediction in the porous structure of interest. Our method is Bayesian and generates densities of stress fields, from which credible intervals may be extracted. As a second scientific contribution, we propose to improve the extrapolation ability of our network by deploying a strategy of online physics-based corrections. Specifically, at the inference stage we condition the posterior predictions of our probabilistic model to satisfy partial equilibrium at the microscale. This is done using an Ensemble Kalman algorithm to ensure tractability of the Bayesian conditioning operation. We show that this methodology alleviates the effect of undesirable biases observed in the outputs of the uncorrected GNN and improves the accuracy of the predictions in general.
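The conditioning step can be illustrated with a generic stochastic Ensemble Kalman update; the linear operator `H` and the Gaussian noise level below are hypothetical stand-ins for the microscale equilibrium residuals, not the paper's actual operators.

```python
import numpy as np

def enkf_update(ensemble, H, y, noise_std, seed=0):
    """One stochastic Ensemble Kalman update: condition an ensemble of
    prior samples on the (here linear) constraint y = H x + noise.
    ensemble: (n_members, n_dim) array of prior draws."""
    rng = np.random.default_rng(seed)
    X = np.asarray(ensemble, dtype=float)
    Xm = X - X.mean(axis=0)
    C = Xm.T @ Xm / (len(X) - 1)                    # sample covariance
    p = len(y)
    S = H @ C @ H.T + noise_std ** 2 * np.eye(p)    # innovation covariance
    K = C @ H.T @ np.linalg.inv(S)                  # Kalman gain
    perturbed = y + noise_std * rng.standard_normal((len(X), p))
    return X + (perturbed - X @ H.T) @ K.T          # shifted ensemble
```

Each ensemble member is shifted toward consistency with the constraint, so the updated cloud approximates the conditioned posterior without an explicit Bayesian inversion.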
We establish conditions under which latent causal graphs are nonparametrically identifiable and can be reconstructed from unknown interventions in the latent space. Our primary focus is the identification of the latent structure in measurement models without parametric assumptions such as linearity or Gaussianity. Moreover, we do not assume the number of hidden variables is known, and we show that at most one unknown intervention per hidden variable is needed. This extends a recent line of work on learning causal representations from observations and interventions. The proofs are constructive and introduce two new graphical concepts -- imaginary subsets and isolated edges -- that may be useful in their own right. As a matter of independent interest, the proofs also involve a novel characterization of the limits of edge orientations within the equivalence class of DAGs induced by unknown interventions. These are the first results to characterize the conditions under which causal representations are identifiable without making any parametric assumptions in a general setting with unknown interventions and without faithfulness.
This paper investigates the multiple testing problem for high-dimensional sparse binary sequences, motivated by the crowdsourcing problem in machine learning. We study the empirical Bayes approach for multiple testing in the high-dimensional Bernoulli model with a conjugate spike and uniform slab prior. We first show that the hard thresholding rule deduced from the posterior distribution is suboptimal. Consequently, the $\ell$-value procedure constructed from this posterior tends to be overly conservative in estimating the false discovery rate (FDR). We then propose two new procedures, based on adjusted $\ell$-values and $q$-values, to correct this issue. Sharp frequentist theoretical results are obtained, demonstrating that both procedures can effectively control the FDR under sparsity. Numerical experiments are conducted to validate our theory in finite samples. To the best of our knowledge, this work provides the first uniform FDR control result in multiple testing for high-dimensional sparse binary data.
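To make the decision rule concrete, here is the standard way $\ell$-values (local FDR scores) are turned into rejections: reject the hypotheses with the smallest scores as long as their running average stays below the target level. The adjusted-$\ell$ and $q$-value procedures of the abstract differ in how the scores themselves are computed, which is not reproduced here.

```python
import numpy as np

def l_value_rejections(l_values, alpha):
    # Sort the l-values; the cumulative mean of a sorted sequence is
    # nondecreasing, so the rejection set is the longest prefix whose
    # running average of scores stays at or below alpha.
    l_values = np.asarray(l_values, dtype=float)
    order = np.argsort(l_values)
    running_mean = np.cumsum(l_values[order]) / np.arange(1, l_values.size + 1)
    k = int(np.sum(running_mean <= alpha))
    reject = np.zeros(l_values.size, dtype=bool)
    reject[order[:k]] = True
    return reject
```

The running average of the rejected scores is exactly the (estimated) FDR of the rejection set, which is why this thresholding rule targets the desired level.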
In prediction settings where data are collected over time, it is often of interest to understand both the importance of variables for predicting the response at each time point and the importance summarized over the time series. Building on recent advances in estimation and inference for variable importance measures, we define summaries of variable importance trajectories. These measures can be estimated, and the same approaches to inference can be applied, regardless of the choice of algorithm(s) used to estimate the prediction function. We propose a nonparametric efficient estimation and inference procedure, as well as a null hypothesis testing procedure, that are valid even when complex machine learning tools are used for prediction. Through simulations, we demonstrate that our proposed procedures have good operating characteristics, and we illustrate their use by investigating the longitudinal importance of risk factors for suicide attempt.
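As a toy illustration of a variable importance measure at a single time point, one can take the drop in predictiveness when a feature is removed; here linear least squares and in-sample $R^2$ stand in for the arbitrary machine learning prediction functions the abstract allows, and a trajectory summary would simply average this quantity over time points.

```python
import numpy as np

def r2(y, yhat):
    # Proportion of variance explained by the predictions.
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def drop_in_r2_importance(X, y, j):
    # Plug-in importance of feature j: predictiveness of the full fit
    # minus predictiveness of the fit with column j deleted.
    full = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    Xr = np.delete(X, j, axis=1)
    reduced = Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]
    return r2(y, full) - r2(y, reduced)
```

The efficient estimators of the abstract replace these naive plug-in fits with debiased versions admitting valid confidence intervals; the code only conveys the underlying contrast.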
We consider the estimation of the cumulative hazard function, and equivalently the distribution function, with censored data under a setup that preserves the privacy of the survival database. This is done through an $\alpha$-locally differentially private mechanism for the failure indicators and by proposing a non-parametric kernel estimator for the cumulative hazard function that remains consistent under the privatization. Under mild conditions, we also prove lower bounds for the minimax rates of convergence and show that the estimator is minimax optimal under a well-chosen bandwidth.
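A standard $\alpha$-LDP mechanism for a binary failure indicator is randomized response, whose bias can be inverted in expectation; the sketch below is a generic illustration of this privatize-then-debias pattern, not the paper's exact estimator.

```python
import math
import random

def privatize(delta, alpha, rng):
    # Randomized response on the failure indicator delta in {0, 1}:
    # keep it with probability p = e^alpha / (1 + e^alpha), flip it
    # otherwise.  This mechanism is alpha-locally differentially private.
    p = math.exp(alpha) / (1.0 + math.exp(alpha))
    return delta if rng.random() < p else 1 - delta

def debias(z, alpha):
    # The privatized bit satisfies E[z] = (2p - 1) * delta + (1 - p);
    # inverting this affine map gives an unbiased (but noisier)
    # surrogate for delta, usable inside a kernel estimator.
    p = math.exp(alpha) / (1.0 + math.exp(alpha))
    return (z - (1.0 - p)) / (2.0 * p - 1.0)
```

The debiased surrogates are unbiased but have inflated variance, which is what degrades the minimax rate under privatization.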
Linear systems with a tensor product structure arise naturally when considering the discretization of Laplace-type differential equations or, more generally, multidimensional operators with separable coefficients. In this work, we focus on the numerical solution of linear systems of the form $$ \left(I\otimes \dots\otimes I \otimes A_1+\dots + A_d\otimes I \otimes\dots \otimes I\right)x=b,$$ where the matrices $A_t\in\mathbb R^{n\times n}$ are symmetric positive definite and belong to the class of hierarchically semiseparable matrices. We propose and analyze a nested divide-and-conquer scheme, based on the technology of low-rank updates, that attains the quasi-optimal computational cost $\mathcal O(n^d (\log(n) + \log(\kappa)^2 + \log(\kappa) \log(\epsilon^{-1})))$ where $\kappa$ is the condition number of the linear system, and $\epsilon$ the target accuracy. Our theoretical analysis highlights the role of inexactness in the nested calls of our algorithm and provides worst case estimates for the amplification of the residual norm. The performance is validated on 2D and 3D case studies.
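For intuition, the $d=2$ instance of such a system is equivalent to the Sylvester equation $A_1 X + X A_2 = B$ with $b=\mathrm{vec}(X)$, which a dense baseline solves by diagonalization at $O(n^3)$ cost; this is the textbook method that the hierarchical divide-and-conquer scheme above is designed to beat.

```python
import numpy as np

def kron_sum_solve(A1, A2, b):
    """Solve (I (x) A1 + A2 (x) I) x = b for symmetric positive definite
    A1, A2 by diagonalization, via the equivalent Sylvester equation
    A1 X + X A2 = B with b = vec(X) (column-major)."""
    n = A1.shape[0]
    l1, Q1 = np.linalg.eigh(A1)
    l2, Q2 = np.linalg.eigh(A2)
    B = b.reshape(n, n, order="F")            # vec^{-1}, column-major
    C = Q1.T @ B @ Q2                          # rotate to the eigenbases
    X = Q1 @ (C / (l1[:, None] + l2[None, :])) @ Q2.T
    return X.reshape(-1, order="F")
```

In the eigenbases the Kronecker sum acts entrywise as $\lambda^{(1)}_i + \lambda^{(2)}_j$, so the solve reduces to one elementwise division; positive definiteness guarantees the denominators are positive.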
In this work, a new class of vector-valued phase field models is presented, where the values of the phase parameters are constrained by a convex set. The generated phase fields feature the partition of the domain into patches of distinct phases, separated by thin interfaces. The configuration and dynamics of the phases depend directly on the geometry and topology of the convex constraint set, which makes it possible to engineer models of this type that exhibit desired interactions and patterns. An efficient proximal gradient solver is introduced to numerically study their $L^2$-gradient flow, i.e.~the associated Allen-Cahn-type equation. Applying the solver together with various choices for the convex constraint set yields numerical results that feature a number of patterns observed in nature and engineering, such as multiphase grains in metal alloys, traveling waves in reaction-diffusion systems, and vortices in magnetic materials.
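A minimal instance of such a constrained flow is an explicit proximal-gradient step with the probability simplex as the convex set (a common choice for multiphase fields: one nonnegative fraction per phase, summing to one); the energy gradient is left abstract, and the step is a sketch rather than the paper's solver.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability
    simplex, via the standard sort-based O(k log k) algorithm."""
    u = np.sort(v, axis=-1)[..., ::-1]                 # sort descending
    css = np.cumsum(u, axis=-1) - 1.0
    k = np.arange(1, v.shape[-1] + 1)
    cond = u - css / k > 0                              # prefix of True
    rho = cond.sum(axis=-1)
    theta = np.take_along_axis(css, rho[..., None] - 1, axis=-1) / rho[..., None]
    return np.maximum(v - theta, 0.0)

def proximal_gradient_step(u, grad_E, tau):
    # One explicit step of the constrained L2-gradient flow: descend
    # the smooth energy, then project back onto the constraint set.
    return project_simplex(u - tau * grad_E(u))
```

Swapping in the projection onto a different convex set changes which interface patterns the flow can produce, which is the engineering lever the abstract describes.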
In multiple testing, several criteria exist to control type I errors. The false discovery rate, which evaluates the expected proportion of false discoveries among the rejected null hypotheses, has become the standard approach in this setting. However, false discovery rate control may be too conservative when the effects are weak. In this paper we propose instead to control the number of significant effects, where 'significant' refers to a pre-specified threshold $\gamma$. This means that a $(1-\alpha)$-lower confidence bound $L$ for the number of non-true null hypotheses with p-values below $\gamma$ is provided. When one rejects the nulls corresponding to the $L$ smallest p-values, the probability that the number of false positives exceeds the number of false negatives among the significant effects is bounded by $\alpha$. Relative merits of the proposed criterion are discussed. Procedures to control the number of significant effects in practice are introduced and investigated both theoretically and through simulations. Illustrative real data applications are given.
Stochastic filtering is a vibrant area of research in both control theory and statistics, with broad applications in many scientific fields. Despite its extensive historical development, an effective method for joint parameter-state estimation in SDEs is still lacking. State-of-the-art particle filtering methods suffer from either sample degeneracy or information loss, with both issues stemming from the dynamics of the particles generated to represent system parameters. This paper provides a novel and effective approach for joint parameter-state estimation in SDEs via Rao-Blackwellization and modularization. Our method operates in two layers: the first layer estimates the system states using a bootstrap particle filter, and the second layer marginalizes out the system parameters explicitly. This strategy circumvents the need to generate particles representing system parameters, thereby mitigating the associated problems of sample degeneracy and information loss. Moreover, our method employs a modularization approach when integrating out the parameters, which significantly reduces the computational complexity. Together, these designs ensure the strong performance of our method. Finally, a numerical example is presented to illustrate that our method outperforms existing approaches by a large margin.
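The first layer is a standard bootstrap particle filter; a minimal generic sketch for a scalar state is given below (the parameter-marginalization layer is model-specific and omitted). The transition sampler, likelihood, and initializer are user-supplied stand-ins.

```python
import numpy as np

def bootstrap_pf(observations, propagate, likelihood, init, n_particles, seed=0):
    """Minimal bootstrap particle filter.  `propagate(x, rng)` samples the
    state transition for the particle array x, `likelihood(y, x)` evaluates
    p(y | x) elementwise, and `init(n, rng)` draws the initial cloud."""
    rng = np.random.default_rng(seed)
    x = init(n_particles, rng)
    means = []
    for y in observations:
        x = propagate(x, rng)                    # prior proposal
        w = likelihood(y, x)
        w = w / w.sum()                          # normalize weights
        means.append(float(np.sum(w * x)))       # filtered posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)
        x = x[idx]                               # multinomial resampling
    return np.array(means)
```

The resampling step is exactly where degeneracy bites when static parameters are appended to the state, which is the failure mode the Rao-Blackwellized scheme above avoids.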
The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
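The implicit-regularization principle has a one-line illustration in overparametrized linear regression: gradient descent initialized at zero keeps its iterates in the row space of the data matrix, so it converges to the minimum-$\ell_2$-norm interpolator. A sketch, assuming more features than samples and an illustrative step size:

```python
import numpy as np

def min_norm_interpolator(X, y):
    # Closed-form minimum-l2-norm solution of X w = y in the
    # overparametrized regime: w = X^T (X X^T)^{-1} y.
    return X.T @ np.linalg.solve(X @ X.T, y)

def gradient_descent(X, y, lr=0.01, steps=20000):
    # Plain gradient descent on the squared loss from w = 0; every
    # update is a combination of the rows of X, so the limit among all
    # interpolators is the one of minimum norm.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y)
    return w
```

No explicit penalty appears anywhere: the norm minimization is enforced purely by the choice of initialization and the geometry of the iterates, which is the simplest instance of the implicit regularization the survey discusses.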