The increasing reliance on Computed Tomography Pulmonary Angiography (CTPA) for Pulmonary Embolism (PE) diagnosis presents challenges and a pressing need for improved diagnostic solutions. The primary objective of this study is to leverage deep learning techniques to enhance the Computer-Assisted Diagnosis (CAD) of PE. To this end, we propose a classifier-guided detection approach that leverages the classifier's probabilistic inference to direct the detection predictions, a novel contribution to automated PE diagnosis. Our classification system includes an Attention-Guided Convolutional Neural Network (AG-CNN) that exploits local context through an attention mechanism, emulating a human expert who considers both the global appearance and local lesion regions before making a decision. The classifier demonstrates robust performance on the FUMPE dataset, achieving an AUROC of 0.927, sensitivity of 0.862, specificity of 0.879, and an F1-score of 0.805 with the Inception-v3 backbone architecture. Moreover, AG-CNN outperforms the DenseNet-121 baseline with an 8.1% AUROC gain. While previous research has mostly focused on detecting PE in the main arteries, our use of state-of-the-art object detection models and ensembling techniques substantially improves the detection of small emboli in the peripheral arteries. Finally, our proposed classifier-guided detection approach further refines the detection metrics, establishing a new state of the art: mAP$_{50}$, sensitivity, and F1-score of 0.846, 0.901, and 0.779, respectively, a 3.7% improvement in mAP$_{50}$ over the previous benchmark. Our research aims to elevate PE patient care by integrating AI solutions into clinical workflows, highlighting the potential of human-AI collaboration in medical diagnostics.
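To make the classifier-guided detection idea concrete, the sketch below shows one plausible way a slice-level PE probability could reweight box-level detection confidences. The fusion rule, the function name, and the interpolation parameter alpha are illustrative assumptions, not the exact mechanism reported above.

\begin{verbatim}
import numpy as np

def classifier_guided_scores(det_scores, p_pe, alpha=0.5):
    # Hypothetical fusion rule: geometric interpolation between the
    # detector's box confidences and the slice-level PE probability.
    det_scores = np.asarray(det_scores, dtype=float)
    return det_scores**alpha * p_pe**(1 - alpha)

# Example: three candidate boxes, classifier fairly confident in PE
print(classifier_guided_scores([0.90, 0.40, 0.10], p_pe=0.80))
\end{verbatim}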
We present a fully-integrated lattice Boltzmann (LB) method for fluid--structure interaction (FSI) simulations that efficiently models deformable solids in complex suspensions and active systems. Our Eulerian method (LBRMT) couples finite-strain solids to the LB fluid on the same fixed computational grid via the reference map technique (RMT). An integral part of the LBRMT is a new LB boundary condition for moving deformable interfaces across different densities. With this fully Eulerian solid--fluid coupling, the LBRMT is well-suited for parallelization and for simulating multi-body contact without remeshing or extra meshes. We validate its accuracy on a benchmark of a deformable solid in a lid-driven cavity, then showcase its versatility through examples of soft solids rotating and settling. With simulations of mixing in complex suspensions, we highlight the potential of the LBRMT for studying collective behavior in soft matter and biofluid dynamics.
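For context, the following minimal sketch implements a single collide-and-stream step of a standard D2Q9 BGK lattice Boltzmann fluid on a periodic grid, i.e. only the generic LB building block that the LBRMT couples to. The reference map solid update and the new moving-interface boundary condition are not shown.

\begin{verbatim}
import numpy as np

# D2Q9 lattice velocities and weights (lattice units, c_s^2 = 1/3)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    # rho: (nx, ny) density, u: (nx, ny, 2) velocity
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def bgk_step(f, tau):
    # macroscopic moments
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    # BGK collision: relax toward the local equilibrium
    f = f - (f - equilibrium(rho, u)) / tau
    # streaming: shift each population along its lattice velocity
    for i in range(9):
        f[..., i] = np.roll(f[..., i], shift=tuple(c[i]), axis=(0, 1))
    return f

# fluid at rest with unit density on a 64 x 64 periodic grid
f = np.tile(w, (64, 64, 1))
f = bgk_step(f, tau=0.8)
\end{verbatim}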
This work introduces an extension of the high-order, single-stage Lax-Wendroff Flux Reconstruction (LWFR) scheme of Babbar et al., JCP (2022) to solve second-order time-dependent partial differential equations in conservative form on curvilinear meshes. The method uses the BR1 scheme to reduce the system to first order so that the earlier LWFR scheme can be applied. The work makes use of the embedded error-based time stepping introduced in Babbar and Chandrashekar (2024), which becomes particularly relevant in the absence of a CFL stability limit for parabolic equations. The scheme is verified to show optimal order convergence and validated with transonic flow over an airfoil and unsteady flow over a cylinder.
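As a minimal sketch of the first-order reduction referred to above, the BR1 approach introduces the gradient as an auxiliary variable; for a scalar convection--diffusion model problem this reads
\[
q = \nabla u, \qquad
\frac{\partial u}{\partial t} + \nabla \cdot \big( f(u) - \nu\, q \big) = 0,
\]
so that the first-order LWFR machinery can be applied to both equations. The model problem and the notation ($f$, $\nu$, $q$) are chosen here only for illustration.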
Colocalization analyses assess whether two traits are affected by the same or distinct causal genetic variants in a single gene region. A class of Bayesian colocalization tests is now routinely used in practice, for example, in genetic analyses within drug development pipelines. In this work, we consider an alternative frequentist approach to colocalization testing that examines the proportionality of genetic associations with each trait. The proportional colocalization approach relies on markedly different assumptions from Bayesian colocalization tests, and can therefore provide valuable complementary evidence in cases where Bayesian colocalization results are inconclusive or sensitive to priors. We propose a novel conditional test of proportional colocalization, prop-coloc-cond, that aims to account for the uncertainty in variant selection in order to recover accurate type I error control. The test can be implemented straightforwardly, requiring only summary data on genetic associations. Simulation evidence and an empirical investigation into GLP1R gene expression demonstrate how tests of proportional colocalization can offer important insights in conjunction with Bayesian colocalization tests.
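To illustrate the proportionality idea, under colocalization at a single shared causal signal the genetic associations of the two traits are expected to be scalar multiples of one another; a sketch of the null hypothesis examined by proportional colocalization approaches is
\[
H_0 : \ \beta_{2} = \eta\, \beta_{1} \quad \text{for some scalar } \eta,
\]
where $\beta_{1}$ and $\beta_{2}$ denote the vectors of genetic associations with traits 1 and 2 at the selected variants. The notation here is illustrative rather than that of the paper.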
In this paper, a fifth-order moment-based Hermite weighted essentially non-oscillatory scheme with unified stencils (termed HWENO-U) is proposed for hyperbolic conservation laws. The main idea of the HWENO-U scheme is to modify the first-order moment by an HWENO limiter only in the time discretization, using the same information as the spatial reconstructions; the limiter not only suppresses spurious oscillations but also ensures the stability of the fully-discrete scheme. For the HWENO reconstructions, a new scale-invariant nonlinear weight is designed that incorporates only the integral average values of the solution, retaining all properties of the original weight while being more robust for simulating challenging problems with sharp scale variations. Compared with previous HWENO schemes, the advantages of the HWENO-U scheme are: (1) a simpler implementation involving only a single HWENO reconstruction applied throughout the entire procedure, without any modification of the governing equations; (2) increased efficiency by using the same candidate stencils, reconstructed polynomials, and linear and nonlinear weights in both the HWENO limiter and the spatial reconstructions; (3) reduced problem-specific dependencies and improved rationality, as the nonlinear weights are identical for a function $u$ and its nonzero multiple $\zeta u$. Besides, the proposed scheme retains the advantages of previous HWENO schemes, including compact reconstruction stencils and the use of artificial linear weights. Extensive benchmarks are carried out to validate the accuracy, efficiency, resolution, and robustness of the proposed scheme.
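The scale-invariance property stated in point (3) can be summarized as follows, writing $\omega_k(u)$ for the nonlinear weight computed from the candidate reconstructions of $u$ (notation introduced here only for illustration):
\[
\omega_k(\zeta u) = \omega_k(u) \qquad \text{for every nonzero constant } \zeta,
\]
so the limiting behaviour of the scheme does not change when the solution is rescaled.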
We give an explicit solution formula for the polynomial regression problem in terms of Schur polynomials and Vandermonde determinants. We thereby generalize the work of Chang, Deng, and Floater to model functions of the form $\sum _{i=1}^{n} a_{i} x^{d_{i}}$ for integer exponents $d_{1} >d_{2} >\dotsc >d_{n} \geq 0$, and phrase the results using Schur polynomials. Even though the solution circumvents the well-known problems with the forward stability of the normal equation, it is only of practical value if $n$ is small, because the number of terms in the formula grows rapidly with the number $m$ of data points. The formula can be evaluated essentially without rounding errors.
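For concreteness, the regression problem being solved is the least-squares fit of the sparse-exponent model to data $(x_j, y_j)$, $j = 1, \dotsc, m$:
\[
\min_{a_1, \dotsc, a_n} \ \sum_{j=1}^{m} \Big( y_j - \sum_{i=1}^{n} a_i x_j^{d_i} \Big)^{2},
\]
whose normal equation involves a generalized Vandermonde matrix; the explicit formula expresses the minimizer through Schur polynomials and Vandermonde determinants.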
We present novel results for fast mixing of Glauber dynamics using the newly introduced and powerful Spectral Independence method from [Anari, Liu, Oveis-Gharan: FOCS 2020]. We mainly focus on the Hard-core model and the Ising model. We obtain bounds for fast mixing with the parameters expressed in terms of the spectral radius of the adjacency matrix, improving on the seminal work in [Hayes: FOCS 2006]. Furthermore, we go beyond the adjacency matrix and establish -- for the first time -- rapid mixing results for Glauber dynamics expressed in terms of the spectral radius of the Hashimoto non-backtracking matrix of the underlying graph $G$. Working with the non-backtracking spectrum is extremely challenging, but also more desirable: its eigenvalues are less correlated with the high-degree vertices than those of the adjacency matrix and express more accurately invariants of the graph such as the growth rate. Our results require ``weak normality'' of the Hashimoto matrix; this condition is mild and allows us to obtain very interesting bounds. We study the pairwise influence matrix ${I}^{\Lambda,\tau}_{G}$ by exploiting the connection between this matrix and the trees of self-avoiding walks; however, we go beyond the standard treatment of the distributional recursions. We call the common framework that underlies our techniques the topological method. Our approach is novel and gives new insights into how to establish Spectral Independence for Gibbs distributions. More importantly, it allows us to derive new, improved rapid mixing bounds for Glauber dynamics on distributions such as the Hard-core model and the Ising model for graphs whose spectral radius is smaller than the maximum degree.
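For reference, the Hashimoto non-backtracking matrix $H$ of a graph $G$ is indexed by directed edges and is defined by
\[
H_{(u \to v),\,(x \to y)} \;=\; \mathbf{1}\{\, v = x \ \text{and} \ y \neq u \,\},
\]
so its entries record walks that never immediately reverse an edge; its spectral radius is the quantity in which our mixing bounds are expressed.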
In this article, we aim to obtain the Fisher--Riemann geodesics for nonparametric families of probability densities as a weak limit of the parametric case as the number of parameters increases.
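In the parametric case, the underlying geometry is that of the Fisher information metric: for a family $p(x \mid \theta)$ with $\theta \in \mathbb{R}^{k}$,
\[
g_{ij}(\theta) = \mathbb{E}_{\theta}\!\left[ \frac{\partial \log p(X \mid \theta)}{\partial \theta_i}\, \frac{\partial \log p(X \mid \theta)}{\partial \theta_j} \right],
\]
and the geodesics of $g$ are the Fisher--Riemann geodesics whose limit, as the number of parameters $k$ grows, is the object of interest above. The notation here is the standard one, included only for illustration.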
We propose a simple empirical representation of expectations such that, for a number of samples above a certain threshold drawn from any probability distribution with a finite fourth-order moment, the proposed estimator outperforms the empirical average when tested against the actual population with respect to the quadratic loss. For datasets smaller than this threshold, the result still holds, but only for a class of distributions determined by their first four moments. Our approach leverages the duality between distributionally robust and risk-averse optimization.
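Writing $\bar X_n$ for the empirical average of $n$ samples and $\hat\mu_n$ for the proposed estimator of $\mu = \mathbb{E}[X]$, the claim above can be summarized (in illustrative notation) as
\[
\mathbb{E}\big[ (\hat\mu_n - \mu)^2 \big] \ \le\ \mathbb{E}\big[ (\bar X_n - \mu)^2 \big] \qquad \text{for all } n \ge n_0,
\]
for every distribution with finite fourth-order moment, with the inequality extending below the threshold $n_0$ only on a class of distributions characterized by their first four moments.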
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
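The comparison step uses Shepard's universal law of generalization, under which the probability of generalizing from one stimulus to another decays exponentially with their distance in the similarity (psychological) space; a standard form of the law is
\[
s(x, y) = \exp\!\big( -\lambda\, d(x, y) \big),
\]
where $d$ is the distance in the similarity space and $\lambda > 0$ a sensitivity parameter. This parametrization is the classic one rather than necessarily the exact form used in the study.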
The evidential regression network (ENet) estimates a continuous target and its predictive uncertainty without costly Bayesian model averaging. However, the target may be predicted inaccurately due to the gradient shrinkage problem of the ENet's original loss function, the negative log marginal likelihood (NLL) loss. In this paper, the objective is to improve the prediction accuracy of the ENet while maintaining its efficient uncertainty estimation by resolving the gradient shrinkage problem. A multi-task learning (MTL) framework, referred to as MT-ENet, is proposed to accomplish this aim. In the MTL framework, we define a Lipschitz-modified mean squared error (MSE) loss as an additional loss and add it to the existing NLL loss. The Lipschitz-modified MSE loss is designed to mitigate gradient conflict with the NLL loss by dynamically adjusting its Lipschitz constant; by doing so, it does not disturb the uncertainty estimation of the NLL loss. The MT-ENet enhances the predictive accuracy of the ENet without losing its uncertainty estimation capability on a synthetic dataset and on real-world benchmarks, including drug-target affinity (DTA) regression. Furthermore, the MT-ENet shows remarkable calibration and out-of-distribution detection capability on the DTA benchmarks.
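Schematically, the multi-task objective combines the two losses described above,
\[
\mathcal{L}_{\mathrm{MT\text{-}ENet}} = \mathcal{L}_{\mathrm{NLL}} + \lambda\, \mathcal{L}_{\mathrm{Lip\text{-}MSE}},
\]
where $\mathcal{L}_{\mathrm{Lip\text{-}MSE}}$ is the Lipschitz-modified MSE loss whose Lipschitz constant is adjusted dynamically so that its gradients do not conflict with those of the NLL loss; the weighting factor $\lambda$ is introduced here only for illustration and is not part of the description above.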