Differential expression (DE) analysis plays a fundamental role in illuminating the molecular mechanisms driving a difference between groups (e.g., due to treatment or disease). While any analysis is run on particular cells or samples, the intent is to generalize to future occurrences of the treatment or disease. Implicitly, this step is justified by assuming that present and future samples are independent and identically distributed draws from the same population. Though this assumption is always false, we hope that any deviation from it is small enough that (A) the conclusions of the analysis still hold and (B) standard tools such as standard errors, significance, and power still reflect generalizability. Conversely, we might worry about these deviations, and about reliance on standard tools, if conclusions could be substantively changed by dropping a very small fraction of the data. While checking every small fraction is computationally intractable, recent work develops an approximation to identify when such an influential subset exists. Building on this work, we develop a metric for the dropping-data robustness of DE; namely, we cast the analysis in a form suitable for the approximation, extend the approximation to models with data-dependent hyperparameters, and extend the notion of a data point from a single cell to a pseudobulk observation. We then overcome the inherent non-differentiability of gene set enrichment analysis to develop an additional approximation for the robustness of top gene sets. We assess the robustness of DE for published single-cell RNA-seq data and discover that thousands of genes can have their results flipped by dropping less than 1% of the data, including hundreds that are sensitive to dropping a single cell (0.07% of the data). Surprisingly, this non-robustness extends to high-level takeaways: half of the top 10 gene sets can be changed by dropping 1-2% of cells, and two of the top 10 can be changed by dropping a single cell.
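To make the dropping-data check concrete, here is a minimal, hypothetical sketch of the first-order (leave-one-out) influence approximation applied to the sign of a single ordinary-least-squares coefficient; the function name, the choice of OLS, and the `alpha` parameter are illustrative assumptions, and the analysis described above instead works with pseudobulk observations and its own DE model.

```python
import numpy as np

def can_flip_sign_by_dropping(X, y, alpha=0.01):
    """Can removing at most an `alpha` fraction of observations flip the sign
    of the last OLS coefficient? Uses a first-order (leave-one-out) influence
    approximation; hypothetical illustration, not the paper's exact metric."""
    n = X.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # First-order effect of dropping observation i on the last coefficient:
    # beta_dropped[-1] ~= beta[-1] - infl[i]
    infl = (X @ XtX_inv[:, -1]) * resid
    k = max(1, int(alpha * n))
    # Drop the k observations whose removal pushes the coefficient hardest
    # toward the opposite sign.
    worst = np.sort(infl)[::-1][:k] if beta[-1] > 0 else np.sort(infl)[:k]
    return np.sign(beta[-1] - worst.sum()) != np.sign(beta[-1])
```

The useful property of this style of approximation is additivity: the estimated effect of dropping a subset is the sum of per-observation influences, which is what makes searching over all small subsets tractable.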
The problem of estimating a parameter in the drift coefficient is addressed for $N$ discretely observed independent and identically distributed stochastic differential equations (SDEs). This is done under additional constraints, wherein only public data can be published and used for inference. The concept of local differential privacy (LDP) is formally introduced for a system of stochastic differential equations. The objective is to estimate the drift parameter by proposing a contrast function based on a pseudo-likelihood approach. Suitably scaled Laplace noise is incorporated to meet the privacy requirements. Our key findings encompass the derivation of explicit conditions tied to the privacy level. Under these conditions, we establish the consistency and asymptotic normality of the associated estimator. Notably, the convergence rate is intricately linked to the privacy level and, in some situations, may be completely different from the case where privacy constraints are ignored. Our results hold true as the discretization step approaches zero and the number of processes $N$ tends to infinity.
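As a rough illustration of how the privacy constraint can be met (not necessarily the paper's exact mechanism), the sketch below clips the increments of one observed path and adds Laplace noise calibrated to the clipping bound; the names `privatize_increments`, `clip`, and `eps` are placeholders.

```python
import numpy as np

def privatize_increments(path, eps, clip=1.0, rng=None):
    """Release eps-locally differentially private increments of one discretely
    observed path: clip each increment to [-clip, clip] (sensitivity 2*clip)
    and add centered Laplace noise of scale 2*clip/eps. Illustrative sketch
    only; the paper's mechanism and contrast function may differ."""
    rng = np.random.default_rng() if rng is None else rng
    inc = np.clip(np.diff(path), -clip, clip)
    return inc + rng.laplace(scale=2.0 * clip / eps, size=inc.shape)
```

In the setting above, the pseudo-likelihood contrast would then be evaluated on such privatized data from the $N$ paths.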
Speech and language models trained through self-supervised learning (SSL) demonstrate strong alignment with brain activity during speech and language perception. However, given their distinct training modalities, it remains unclear whether they correlate with the same neural aspects. We directly address this question by evaluating the brain prediction performance of two representative SSL models, Wav2Vec2.0 and GPT-2, designed for speech and language tasks. Our findings reveal that both models accurately predict speech responses in the auditory cortex, with a significant correlation between their brain predictions. Notably, shared speech contextual information between Wav2Vec2.0 and GPT-2 accounts for the majority of explained variance in brain activity, surpassing static semantic and lower-level acoustic-phonetic information. These results underscore the convergence of speech contextual representations in SSL models and their alignment with the neural network underlying speech perception, offering valuable insights into both SSL models and the neural basis of speech and language processing.
We show that any one-round algorithm that computes a minimum spanning tree (MST) in the unicast congested clique must use a link bandwidth of $\Omega(\log^3 n)$ bits in the worst case. Consequently, computing an MST under the standard assumption of $O(\log n)$-size messages requires at least $2$ rounds. This is the first round complexity lower bound in the unicast congested clique for a problem where the output size is small, i.e., $O(n\log n)$ bits. Our lower bound holds as long as every edge of the MST is output by an incident node. To the best of our knowledge, all prior lower bounds for the unicast congested clique either considered problems with large output sizes (e.g., triangle enumeration) or required every node to learn the entire output.
The expressivity of Graph Neural Networks (GNNs) can be entirely characterized by appropriate fragments of first-order logic. Namely, any query of the two-variable fragment of graded modal logic (GC2) interpreted over labeled graphs can be expressed using a GNN whose size depends only on the depth of the query. As pointed out by [Barceló et al., 2020; Grohe, 2021], this description holds for a family of activation functions, leaving open the possibility of a hierarchy of logics expressible by GNNs depending on the chosen activation function. In this article, we show that such a hierarchy indeed exists by proving that GC2 queries cannot be expressed by GNNs with polynomial activation functions. This implies a separation between polynomial and popular non-polynomial activations (such as Rectified Linear Units) and answers an open question formulated by [Grohe, 2021].
Nearest-neighbor methods have become popular in statistics and play a key role in statistical learning. Important decisions in nearest-neighbor methods concern the variables to use (when many potential candidates exist) and how to measure the dissimilarity between units. The first decision depends on the scope of the application, while the second depends mainly on the type of variables. Unfortunately, relatively few options can handle mixed-type variables, a situation frequently encountered in practical applications. The most popular dissimilarity for mixed-type variables is derived as the complement to one of Gower's similarity coefficient. It is appealing because it ranges between 0 and 1, is an average of the scaled dissimilarities calculated variable by variable, handles missing values, and allows for a user-defined weighting scheme when averaging dissimilarities. The discussion of weighting schemes is sometimes misleading, since it often ignores that the unweighted "standard" setting hides an unbalanced contribution of the single variables to the overall dissimilarity. We address this drawback following the recent idea of introducing a weighting scheme that minimizes the differences in the correlation between each contributing dissimilarity and the resulting weighted Gower's dissimilarity. In particular, this note proposes different approaches for measuring the correlation depending on the type of variables. The performance of the proposed approaches is evaluated in simulation studies related to classification and imputation of missing values.
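For concreteness, a minimal sketch of the weighted Gower dissimilarity for mixed-type data follows; the range scaling and pairwise handling of missing values mirror the standard definition, while the function name and the uniform default weights are illustrative, and the correlation-based choice of weights proposed in the note is not implemented here.

```python
import numpy as np
import pandas as pd

def gower_dissimilarity(df, weights=None):
    """Weighted Gower dissimilarity for a mixed-type DataFrame: numeric
    columns contribute range-scaled absolute differences, other columns a
    0/1 mismatch, and the per-variable dissimilarities are averaged with
    (optional) weights, skipping missing values pairwise. Sketch only."""
    n, cols = len(df), df.columns
    w = np.ones(len(cols)) if weights is None else np.asarray(weights, float)
    num = np.zeros((n, n))   # weighted sum of per-variable dissimilarities
    den = np.zeros((n, n))   # sum of weights over non-missing pairs
    for j, c in enumerate(cols):
        x = df[c].to_numpy()
        obs = ~pd.isna(x)
        valid = np.outer(obs, obs)
        if pd.api.types.is_numeric_dtype(df[c]):
            col_range = np.nanmax(x) - np.nanmin(x)
            d = np.abs(x[:, None] - x[None, :]) / (col_range if col_range > 0 else 1.0)
        else:
            d = (x[:, None] != x[None, :]).astype(float)
        num += w[j] * np.where(valid, d, 0.0)
        den += w[j] * valid
    return num / np.maximum(den, 1e-12)
```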
This paper is focused on the study of entropic regularization in optimal transport as a smoothing method for Wasserstein estimators, through the prism of the classical tradeoff between approximation and estimation errors in statistics. Wasserstein estimators are defined as solutions of variational problems whose objective function involves the use of an optimal transport cost between probability measures. Such estimators can be regularized by replacing the optimal transport cost by its regularized version using an entropy penalty on the transport plan. The use of such a regularization has a potentially significant smoothing effect on the resulting estimators. In this work, we investigate its potential benefits on the approximation and estimation properties of regularized Wasserstein estimators. Our main contribution is to discuss how entropic regularization may reach, at a lower computational cost, statistical performances that are comparable to those of un-regularized Wasserstein estimators in statistical learning problems involving distributional data analysis. To this end, we present new theoretical results on the convergence of regularized Wasserstein estimators. We also study their numerical performances using simulated and real data in the supervised learning problem of proportions estimation in mixture models using optimal transport.
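As a minimal sketch of the entropic regularization in question, the function below computes the entropy-regularized transport cost between two discrete measures via Sinkhorn iterations; the function name, the penalty strength `reg`, and the fixed iteration count are illustrative, and in the setting above this cost would replace the exact optimal transport cost inside the variational problem defining the Wasserstein estimator.

```python
import numpy as np

def entropic_ot_cost(a, b, C, reg=0.1, n_iter=500):
    """Entropy-regularized optimal transport between histograms a and b with
    cost matrix C, computed by Sinkhorn iterations. Returns the transport
    cost <P, C> under the regularized plan P. Minimal illustrative sketch."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return float(np.sum(P * C))
```

Smaller values of `reg` approach the unregularized cost at a higher computational price, which is the approximation/estimation/computation tradeoff studied above.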
We present an overview of recent developments on the convergence analysis of numerical methods for inviscid multidimensional compressible flows that preserve underlying physical structures. We introduce the concept of generalized solutions, the so-called dissipative solutions, and explain their relationship to other commonly used solution concepts. In numerical experiments we apply K-convergence of numerical solutions and approximate turbulent solutions together with the Reynolds stress defect and the energy defect.
LiNGAM determines the variable order from cause to effect using additive noise models, but it faces challenges with confounding. Previous methods maintained LiNGAM's fundamental structure while trying to identify and address variables affected by confounding. As a result, these methods required significant computational resources regardless of the presence of confounding, and they did not ensure the detection of all confounding types. In contrast, this paper enhances LiNGAM by introducing LiNGAM-MMI, a method that quantifies the magnitude of confounding using KL divergence and arranges the variables to minimize its impact. This method efficiently achieves a globally optimal variable order through a shortest-path problem formulation. LiNGAM-MMI processes data as efficiently as traditional LiNGAM in scenarios without confounding, while effectively addressing confounding situations. Our experimental results suggest that LiNGAM-MMI more accurately determines the correct variable order, both in the presence and absence of confounding.
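To illustrate the shortest-path formulation (with a placeholder cost in place of LiNGAM-MMI's actual KL/MMI-based edge weights), a globally optimal variable order for an additive cost can be found by dynamic programming over subsets, which is exactly a shortest path from the empty set to the full variable set.

```python
def best_order_shortest_path(n_vars, step_cost):
    """Globally optimal variable order as a shortest path from the empty set
    to the full variable set: each edge places one more variable and costs
    step_cost(placed, v), a stand-in for a KL/MMI-based confounding measure.
    Hypothetical sketch; LiNGAM-MMI's actual edge weights differ."""
    best = {frozenset(): (0.0, [])}            # subset -> (cost, order)
    for size in range(n_vars):
        frontier = {s: cv for s, cv in best.items() if len(s) == size}
        for placed, (cost, order) in frontier.items():
            for v in range(n_vars):
                if v in placed:
                    continue
                nxt = frozenset(placed | {v})
                c = cost + step_cost(placed, v)
                if nxt not in best or c < best[nxt][0]:
                    best[nxt] = (c, order + [v])
    return best[frozenset(range(n_vars))]
```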
A stepped wedge design is a unidirectional crossover design where clusters are randomized to distinct treatment sequences defined by calendar time. While model-based analysis of stepped wedge designs -- via linear mixed models or generalized estimating equations -- is standard practice to evaluate treatment effects accounting for clustering and adjusting for baseline covariates, formal results on their model-robustness properties remain unavailable. In this article, we study when a potentially misspecified multilevel model can offer consistent estimators for treatment effect estimands that are functions of calendar time and/or exposure time. We describe a super-population potential outcomes framework to define treatment effect estimands of interest in stepped wedge designs, and adapt linear mixed models and generalized estimating equations to achieve estimand-aligned inference. We prove a central result that, as long as the treatment effect structure is correctly specified in each working model, our treatment effect estimator is robust to arbitrary misspecification of all remaining model components. The theoretical results are illustrated via simulation experiments and re-analysis of a cardiovascular stepped wedge cluster randomized trial.
A principal hires an agent to work on a long-term project that culminates in a breakthrough or a breakdown. At each time, the agent privately chooses to work or shirk. Working increases the arrival rate of breakthroughs and decreases the arrival rate of breakdowns. To motivate the agent to work, the principal conducts costly inspections. She fires the agent if shirking is detected. We characterize the principal's optimal inspection policy. Periodic inspections are optimal if work primarily generates breakthroughs. Random inspections are optimal if work primarily prevents breakdowns. Crucially, the agent's actions determine his risk attitude over the timing of punishments.