We consider covariance estimation of any subgaussian distribution from finitely many i.i.d. samples that are quantized to one bit of information per entry. Recent work has shown that a reliable estimator can be constructed if uniformly distributed dithers on $[-\lambda,\lambda]$ are used in the one-bit quantizer. This estimator enjoys near-minimax optimal, non-asymptotic error estimates in the operator and Frobenius norms if $\lambda$ is chosen proportional to the largest variance of the distribution. However, this quantity is not known a priori, and in practice $\lambda$ needs to be carefully tuned to achieve good performance. In this work we resolve this problem by introducing a tuning-free variant of this estimator, which replaces $\lambda$ by a data-driven quantity. We prove that this estimator satisfies the same non-asymptotic error estimates, up to small (logarithmic) losses and a slightly worse probability bound. Our proof relies on a new version of the Burkholder-Rosenthal inequalities for matrix martingales, which is expected to be of independent interest.
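To make the dithering mechanism concrete, here is a minimal simulation sketch, assuming Gaussian data and an oracle choice of $\lambda$; the two independent dithers per sample and the symmetrized product are an illustrative reconstruction of the approach described above, not necessarily the paper's exact estimator. It exploits the fact that, for $\tau$ uniform on $[-\lambda,\lambda]$ and $|x| \le \lambda$, $\mathbb{E}[\lambda\,\mathrm{sign}(x+\tau)] = x$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth covariance for the simulation.
Sigma = np.array([[2.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 0.5]])
d, n = Sigma.shape[0], 200_000
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Oracle tuning: lambda proportional to the largest standard deviation,
# so that |X_ij| <= lambda holds with high probability per entry.
lam = 3.0 * np.sqrt(Sigma.diagonal().max())

# Two independent uniform dithers; for |x| <= lam, E[lam * sign(x + tau)] = x,
# hence lam^2 * q1 q2^T is (nearly) unbiased for x x^T.
q1 = np.sign(X + rng.uniform(-lam, lam, size=(n, d)))
q2 = np.sign(X + rng.uniform(-lam, lam, size=(n, d)))

Sigma_hat = (lam**2 / n) * (q1.T @ q2 + q2.T @ q1) / 2  # symmetrized one-bit estimate
print(np.linalg.norm(Sigma_hat - Sigma, 2))             # operator-norm error
```

A tuning-free variant, as introduced above, would replace the oracle `lam` by a data-driven surrogate estimated from the quantized samples themselves.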
We present a novel computational model for the dynamics of alveolar recruitment/derecruitment (RD), which reproduces the characteristics typically observed in injured lungs. The basic idea is a pressure- and time-dependent variation of the stress-free reference volume in reduced dimensional viscoelastic elements representing the acinar tissue. From a mechanical point of view, the reference volume varies in a time-dependent manner whenever critical opening or closing pressures are crossed. When alveolar structures (partially and progressively) collapse, the volume available for expansion during breathing is reduced, and vice versa, which ultimately enables our model to capture alveolar collapse and reopening. We further introduce a method for the patient-specific determination of the critical parameters governing the new alveolar RD dynamics when they are integrated into the tissue elements, referred to as terminal units, of a spatially resolved, physics-based lung model that simulates the human respiratory system in an anatomically correct manner. The relevant patient-specific parameters of the terminal units are determined from medical image data and the macromechanical behavior of the lung during artificial ventilation. We test the whole modeling approach in a real-life scenario by applying it to the clinical data of a mechanically ventilated patient. The generated lung model is capable of reproducing clinical measurements such as tidal volume and pleural pressure during various ventilation maneuvers. We conclude that this new model is an important step toward personalized treatment of ARDS patients, since it accounts for potentially harmful mechanisms such as cyclic RD and overdistension, and it might help in the development of protective ventilation strategies to reduce ventilator-induced lung injury (VILI).
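As a caricature of this mechanism, the following sketch (with hypothetical names, first-order relaxation dynamics, and parameter roles chosen purely for illustration, not taken from the model itself) shows how a pressure- and time-dependent openness state could shrink or restore the stress-free reference volume of a terminal unit:

```python
def update_openness(s, p, p_open, p_close, tau_open, tau_close, dt):
    """Advance a hypothetical recruitment/derecruitment state by one time step.

    s in [0, 1] is the open fraction of the unit; the stress-free reference
    volume available for expansion scales with s, so collapse shrinks it.
    """
    if p > p_open:            # above the critical opening pressure: recruit
        s += dt / tau_open * (1.0 - s)
    elif p < p_close:         # below the critical closing pressure: derecruit
        s -= dt / tau_close * s
    return min(max(s, 0.0), 1.0)

def reference_volume(s, v_ref_healthy):
    """Illustrative stress-free reference volume of a terminal unit at openness s."""
    return s * v_ref_healthy
```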
The notion that algorithmic systems should be "transparent" and "explainable" is common in the many statements of consensus principles developed by governments, companies, and advocacy organizations. But what exactly do policy and legal actors want from these technical concepts, and how do their desiderata compare with the explainability techniques developed in the machine learning literature? In hopes of better connecting the policy and technical communities, we provide case studies illustrating five ways in which algorithmic transparency and explainability have been used in policy settings: in specific requirements for explanations; in nonbinding guidelines for internal governance of algorithms; in regulations applicable to highly regulated settings; in guidelines meant to increase the utility of legal liability for algorithms; and in broad requirements for model and data transparency. The case studies span a spectrum from precise requirements for specific types of explanations to nonspecific requirements focused on broader notions of transparency, illustrating the diverse needs, constraints, and capacities of various policy actors and contexts. Drawing on these case studies, we discuss promising ways in which transparency and explanation could be used in policy, as well as common factors limiting policymakers' use of algorithmic explainability. We conclude with recommendations for researchers and policymakers.
The family of multivariate skew-normal distributions has many interesting properties. It is shown here that these properties extend to a general class of skew-elliptical distributions. For this class, several stochastic representations are established, and probabilistic properties such as the characteristic function, moments, and quadratic forms, as well as behavior under transformations, are then investigated.
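For orientation, the density of the multivariate skew-normal distribution that this class generalizes can be written, in the standard parameterization, as
\[
f(z) = 2\,\phi_d(z;\,\Omega)\,\Phi\bigl(\alpha^{\top} z\bigr), \qquad z \in \mathbb{R}^d,
\]
where $\phi_d(\cdot\,;\Omega)$ denotes the $d$-dimensional normal density with scale matrix $\Omega$, $\Phi$ the standard normal distribution function, and $\alpha$ the skewness parameter; loosely speaking, the skew-elliptical class replaces the normal kernel and its distribution function by elliptical counterparts.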
The rise of ideological divides in public discourse has received considerable attention in recent years. However, much of this research has been concentrated on Western democratic nations, leaving other regions largely unexplored. Here, we delve into the political landscape of Pakistan, a nation marked by intricate political dynamics and persistent turbulence. Our analysis of Twitter data spanning 2018 to 2022 allows us to capture pivotal shifts and developments in Pakistan's political arena. By examining interactions and content generated by politicians affiliated with major political parties, we reveal a consistent and active presence of politicians on Twitter, with opposition parties exhibiting particularly robust engagement. We explore the alignment of party audiences, highlighting a notable convergence among opposition factions over time. Our analysis also uncovers significant shifts in political affiliations, including the transition of politicians to the opposition alliance. Quantitatively, we assess evolving interaction patterns, showcasing the prevalence of homophilic connections while identifying a growing interconnection among audiences of opposition parties. By accurately reflecting shifts in the political landscape, our study underscores the reliability of our methodology and of social media data as valuable tools for monitoring political polarization and for providing a nuanced understanding of macro-level trends and individual-level transformations.
Given an observational study with $n$ independent but heterogeneous units, our goal is to learn the counterfactual distribution for each unit using only one $p$-dimensional sample per unit containing covariates, interventions, and outcomes. Specifically, we allow for unobserved confounding that introduces statistical biases between interventions and outcomes as well as exacerbates the heterogeneity across units. Modeling the conditional distribution of the outcomes as an exponential family, we reduce learning the unit-level counterfactual distributions to learning $n$ exponential family distributions with heterogeneous parameters and only one sample per distribution. We introduce a convex objective that pools all $n$ samples to jointly learn all $n$ parameter vectors, and provide a unit-wise mean squared error bound that scales linearly with the metric entropy of the parameter space. For example, when the parameters are $s$-sparse linear combinations of $k$ known vectors, the error is $O(s\log k/p)$. En route, we derive sufficient conditions for compactly supported distributions to satisfy the logarithmic Sobolev inequality. As an application of the framework, our results enable consistent imputation of sparsely missing covariates.
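Concretely, one natural instance of this setup, with illustrative notation not taken from the paper, models
\[
p(y_i \mid x_i;\,\theta_i) \propto \exp\bigl(\langle \theta_i,\, \phi(x_i, y_i) \rangle\bigr), \qquad i = 1, \dots, n,
\]
with exactly one observation $(x_i, y_i)$ per unit, and fits all parameter vectors jointly by a pooled convex program under a structural constraint such as $\theta_i = \sum_{j=1}^{k} w_{ij} v_j$ with known vectors $v_1, \dots, v_k$ and $s$-sparse weights $w_i$, matching the sparse example above.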
While text-conditional 3D object generation and manipulation have seen rapid progress, the evaluation of coherence between generated 3D shapes and input textual descriptions lacks a clear benchmark. The reason is twofold: a) the low quality of the textual descriptions in the only publicly available dataset of text-shape pairs; b) the limited effectiveness of the metrics used to quantitatively assess such coherence. In this paper, we propose a comprehensive solution that addresses both weaknesses. Firstly, we employ large language models to automatically refine textual descriptions associated with shapes. Secondly, we propose a quantitative metric to assess text-to-shape coherence based on cross-attention mechanisms. To validate our approach, we conduct a user study and quantitatively compare our metric with existing ones. The refined dataset, the new metric, and a set of text-shape pairs validated by the user study comprise a novel, fine-grained benchmark that we publicly release to foster research on text-to-shape coherence of text-conditioned 3D generative models. The benchmark is available at //cvlab-unibo.github.io/CrossCoherence-Web/.
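As a purely illustrative sketch of a cross-attention-based coherence score (not the metric proposed here; the embedding inputs and the pooling rule are assumptions):

```python
import torch

def coherence_score(text_emb: torch.Tensor, shape_emb: torch.Tensor) -> torch.Tensor:
    """Toy cross-attention coherence between a description and a shape.

    text_emb:  (T, d) token embeddings of the textual description
    shape_emb: (S, d) embeddings of shape elements (e.g., point patches)
    """
    # Scaled dot-product attention from text tokens onto shape elements.
    attn = torch.softmax(text_emb @ shape_emb.T / text_emb.shape[-1] ** 0.5, dim=-1)
    # Concentrated attention is read as stronger text-to-shape coherence;
    # summarize with the mean of each token's maximum attention weight.
    return attn.max(dim=-1).values.mean()
```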
Model selection aims to identify a sufficiently well performing model that is possibly simpler than the most complex model among a pool of candidates. However, the decision-making process itself can inadvertently introduce non-negligible bias when the cross-validation estimates of predictive performance are marred by excessive noise. In finite data regimes, cross-validated estimates can encourage the statistician to select one model over another when it is not actually better for future data. While this bias remains negligible in the case of few models, when the pool of candidates grows, and model selection decisions are compounded (as in forward search), the expected magnitude of selection-induced bias is likely to grow too. This paper introduces an efficient approach, based on order statistics, to estimate and correct selection-induced bias. Numerical experiments demonstrate the reliability of our approach in estimating both selection-induced bias and overfitting along compounded model selection decisions, with specific application to forward search. This work represents a lightweight alternative to more computationally expensive approaches to correcting selection-induced bias, such as nested cross-validation and the bootstrap. Our approach rests on several theoretical assumptions, and we provide a diagnostic to help understand when these may not be valid and when to fall back on safer, albeit more computationally expensive, approaches. The accompanying code facilitates its practical implementation and fosters further exploration in this area.
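The core phenomenon is easy to see in a toy setting. The sketch below (not the paper's order-statistics estimator) shows that picking the best of K models with equal true performance, based on noisy cross-validated scores, inflates the winner's estimate by roughly the expected maximum of K noise terms:

```python
import numpy as np

rng = np.random.default_rng(1)

# K candidate models with identical true predictive performance; their CV
# estimates carry i.i.d. Gaussian noise with standard error se.
K, se, reps = 50, 0.1, 20_000
noisy_scores = rng.normal(0.0, se, size=(reps, K))

# Selecting the best-looking model and reporting its CV score yields an
# optimism equal to the expected maximum of K noise terms.
bias = noisy_scores.max(axis=1).mean()
print(f"selection-induced bias ~ {bias:.3f} (grows with K; here K = {K})")
```

Compounded decisions, as in forward search, repeat this selection step and accumulate the effect.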
Unveiling the underlying governing equations of nonlinear dynamic systems remains a significant challenge, especially in the presence of noisy observations and in the absence of prior knowledge. This study proposes R-DISCOVER, a framework designed to robustly uncover open-form partial differential equations (PDEs) from limited and noisy data. The framework operates through two alternating update processes: discovering and embedding. The discovering phase employs symbolic representation and a reinforcement learning (RL)-guided hybrid PDE generator to efficiently produce diverse open-form PDEs with tree structures. A neural network-based predictive model fits the system response and serves as the reward evaluator for the generated PDEs. PDEs with superior fits are utilized to iteratively optimize the generator via the RL method, and the best-performing PDE is selected by a parameter-free stability metric. The embedding phase integrates the initially identified PDE from the discovering process as a physical constraint into the predictive model for robust training. The traversal of PDE trees automates the construction of the computational graph and the embedding process without human intervention. Numerical experiments demonstrate our framework's capability to uncover governing equations from nonlinear dynamic systems with limited and highly noisy data, and to outperform other physics-informed neural network-based discovery methods. This work opens new avenues for exploring real-world systems with limited understanding.
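To fix ideas, the sketch below evaluates the fit of a single hard-coded candidate PDE as a reward; in the framework described above, the candidate right-hand side would instead be produced by the RL-guided tree generator, and the field would come from the neural predictive model rather than raw data. All names and discretization details here are illustrative.

```python
import numpy as np

def pde_reward(u: np.ndarray, dx: float, dt: float) -> float:
    """Reward for the illustrative candidate u_t = a*u_xx + b*u*u_x.

    u is an (n_x, n_t) array of the field sampled on a regular grid.
    """
    u_t  = np.gradient(u, dt, axis=1)
    u_x  = np.gradient(u, dx, axis=0)
    u_xx = np.gradient(u_x, dx, axis=0)
    # Fit the linear coefficients of the candidate terms by least squares.
    Theta = np.stack([u_xx.ravel(), (u * u_x).ravel()], axis=1)
    coef, *_ = np.linalg.lstsq(Theta, u_t.ravel(), rcond=None)
    resid = u_t.ravel() - Theta @ coef
    return -np.linalg.norm(resid) / np.linalg.norm(u_t)  # higher is better
```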
The autologistic actor attribute model, or ALAAM, is the social influence counterpart of the better-known exponential-family random graph model (ERGM) for social selection. Extensive experience with ERGMs has shown that the problem of near-degeneracy which often occurs with simple models can be overcome by using "geometrically weighted" or "alternating" statistics. In the much more limited empirical applications of ALAAMs to date, the problem of near-degeneracy, although theoretically expected, appears to have been less of an issue. In this work I present a comprehensive survey of ALAAM applications, showing that this model has to date only been used with relatively small networks, in which near-degeneracy does not appear to be a problem. I show that near-degeneracy does occur in simple ALAAM models of larger empirical networks, define some geometrically weighted ALAAM statistics analogous to those for ERGM, and demonstrate that models with these statistics do not suffer from near-degeneracy and hence can be estimated where models with the simple statistics could not.
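For context, the ERGM geometrically weighted degree statistic that such constructions emulate can be written in node form as
\[
u(y; \theta_s) = e^{\theta_s} \sum_{i=1}^{n} \Bigl(1 - \bigl(1 - e^{-\theta_s}\bigr)^{d_i(y)}\Bigr),
\]
where $d_i(y)$ is the degree of node $i$ and $\theta_s > 0$ is a decay parameter: each additional tie of an already high-degree node contributes geometrically less, which is what counteracts near-degeneracy. Loosely speaking, the ALAAM statistics defined in this work apply the same geometric down-weighting to attribute-based configuration counts.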
Untargeted metabolomic profiling through liquid chromatography-mass spectrometry (LC-MS) measures a vast array of metabolites within biospecimens, advancing drug development, disease diagnosis, and risk prediction. However, the low throughput of LC-MS poses a major challenge for biomarker discovery, annotation, and experimental comparison, necessitating the merging of multiple datasets. Current data pooling methods encounter practical limitations due to their vulnerability to data variations and hyperparameter dependence. Here we introduce GromovMatcher, a flexible and user-friendly algorithm that automatically combines LC-MS datasets using optimal transport. By capitalizing on feature intensity correlation structures, GromovMatcher delivers superior alignment accuracy and robustness compared to existing approaches. The algorithm scales to thousands of features while requiring minimal hyperparameter tuning. Applying our method to experimental patient studies of liver and pancreatic cancer, we discover shared metabolic features related to patient alcohol intake, demonstrating how GromovMatcher facilitates the search for biomarkers associated with lifestyle risk factors linked to several cancer types.
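As a minimal sketch of this idea, assuming the open-source POT (Python Optimal Transport) package, with preprocessing and weighting chosen for illustration rather than taken from GromovMatcher itself:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (an assumed dependency)

def match_features(X1: np.ndarray, X2: np.ndarray) -> np.ndarray:
    """Match two LC-MS feature sets via their correlation structures.

    X1: (n1 samples, p1 features), X2: (n2 samples, p2 features). Features are
    compared through intensity-correlation dissimilarities, so no shared
    coordinate system between the two studies is required.
    """
    C1 = 1.0 - np.abs(np.corrcoef(X1, rowvar=False))  # (p1, p1) dissimilarities
    C2 = 1.0 - np.abs(np.corrcoef(X2, rowvar=False))  # (p2, p2) dissimilarities
    p = np.full(C1.shape[0], 1.0 / C1.shape[0])       # uniform feature weights
    q = np.full(C2.shape[0], 1.0 / C2.shape[0])
    # Gromov-Wasserstein couples features whose correlation neighborhoods align.
    return ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
```

Thresholding the returned coupling matrix gives candidate feature correspondences between the two datasets.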