Objective: Quantitative $T_1\rho$ imaging has potential for the assessment of biochemical alterations in liver pathologies. Deep learning methods have been employed to accelerate quantitative $T_1\rho$ imaging. To deploy artificial intelligence-based quantitative imaging methods in complicated clinical environments, it is valuable to estimate the uncertainty of the predicted $T_1\rho$ values to convey the confidence level of the quantification results. The uncertainty can also be utilized to aid post-hoc quantitative analysis and model learning tasks. Approach: To address this need, we propose a parametric map refinement approach for learning-based $T_1\rho$ mapping and train the model in a probabilistic way to model the uncertainty. We further propose to utilize the uncertainty map to spatially weight the training of an improved $T_1\rho$ mapping network, further improving mapping performance, and to remove pixels with unreliable $T_1\rho$ values from the region of interest. The framework was tested on a dataset of 51 patients with different liver fibrosis stages. Main results: Our results indicate that the learning-based map refinement method achieves a relative mapping error of less than 3% while providing uncertainty estimation simultaneously. The estimated uncertainty reflects the actual error level, and it can be used to further reduce the relative $T_1\rho$ mapping error to 2.60% as well as to effectively remove unreliable pixels from the region of interest. Significance: Our studies demonstrate that the proposed approach has the potential to provide a learning-based quantitative MRI system for trustworthy $T_1\rho$ mapping of the liver.
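A standard way to obtain such per-pixel uncertainty is to train the refinement network with a heteroscedastic (Gaussian negative log-likelihood) loss and then reuse the predicted variance as a spatial weight, as the abstract describes. The PyTorch sketch below is illustrative only, under the assumption of Gaussian noise; the names are hypothetical and this is not the authors' implementation.

```python
import torch

def heteroscedastic_loss(pred_t1rho, log_var, target_t1rho):
    """Gaussian NLL: the network predicts a T1rho map together with a
    per-pixel log-variance, which serves as the uncertainty estimate."""
    precision = torch.exp(-log_var)
    return (0.5 * precision * (pred_t1rho - target_t1rho) ** 2
            + 0.5 * log_var).mean()

def uncertainty_weighted_loss(pred_t1rho, target_t1rho, log_var):
    """Spatially weight the training of a second mapping network so that
    pixels deemed unreliable by the first model contribute less."""
    weight = torch.exp(-log_var).detach()  # stop gradients through weights
    return (weight * (pred_t1rho - target_t1rho) ** 2).mean()
```

At inference, thresholding the predicted variance gives a simple rule for flagging unreliable pixels in a region of interest.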
This paper studies $k$-claw-free graphs, exploring the connection between an extremal combinatorics question and the power of a convex program in approximating the maximum-weight independent set in this graph class. For the extremal question, we consider a notion that we call \textit{conditional $\chi$-boundedness} of a graph: given a graph $G$ that is assumed to contain an independent set of a certain (constant) size, we are interested in upper bounding the chromatic number of $G$ in terms of its clique number. This question, besides being interesting in its own right, has algorithmic implications (which have been relatively neglected in the literature) for the performance of SDP relaxations in estimating the value of the maximum-weight independent set. For $k=3$, Chudnovsky and Seymour (JCTB 2010) prove that any $3$-claw-free graph $G$ with an independent set of size three must satisfy $\chi(G) \leq 2 \omega(G)$. Their result implies a factor-$2$ estimation algorithm for the maximum-weight independent set via an SDP relaxation (providing the first non-trivial result for the maximum-weight independent set in such graphs via a convex relaxation). An obvious open question is whether a similar conditional $\chi$-boundedness phenomenon holds for every $k$-claw-free graph. Our main result answers this question negatively. We further present some evidence that our construction could be useful in studying more broadly the power of convex relaxations in the context of approximating the maximum-weight independent set in $k$-claw-free graphs. In particular, we prove a lower bound on families of convex programs that are stronger than the known convex relaxations used algorithmically in this context.
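In our reading, conditional $\chi$-boundedness can be formalized as follows (the phrasing is ours, inferred from the description above): a class of graphs is conditionally $\chi$-bounded with threshold $c$ and binding function $f$ if

\[
\alpha(G) \geq c \;\Longrightarrow\; \chi(G) \leq f(\omega(G))
\]

for every graph $G$ in the class, where $\alpha$, $\chi$ and $\omega$ denote the independence, chromatic and clique numbers. The Chudnovsky-Seymour result above is then the instance with the class of $3$-claw-free graphs, $c = 3$ and $f(\omega) = 2\omega$.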
Originating in Girard's linear logic, Ehrhard and Regnier's Taylor expansion of $\lambda$-terms has been broadly used as a tool to approximate the terms of several variants of the $\lambda$-calculus. Many results arise from a Commutation theorem relating the normal form of the Taylor expansion of a term to its B\"ohm tree. This led us to consider extending this formalism to the infinitary $\lambda$-calculus, since the $\Lambda_{\infty}^{001}$ version of this calculus has B\"ohm trees as normal forms and seems to be the ideal framework in which to reformulate the Commutation theorem. We give a (co-)inductive presentation of $\Lambda_{\infty}^{001}$. We define a Taylor expansion on this calculus and show that the infinitary $\beta$-reduction can be simulated through it. The target language is the usual resource calculus, and in particular the resource reduction remains finite, confluent and terminating. Finally, we state the generalised Commutation theorem and use our results to provide simple proofs of some normalisation and confluence properties of the infinitary $\lambda$-calculus.
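For orientation, and up to differences in notation across the literature, the key clause of Ehrhard and Regnier's Taylor expansion $\mathcal{T}$ is the one for application, which replaces the argument by all of its finite multiset approximants:

\[
\mathcal{T}(M\,N) \;=\; \sum_{n \geq 0} \frac{1}{n!}\, \mathcal{T}(M)\,\langle \mathcal{T}(N)^{\,n} \rangle ,
\]

with $\mathcal{T}(x) = x$ and $\mathcal{T}(\lambda x.M) = \lambda x.\mathcal{T}(M)$; each term of the sum lives in the resource calculus mentioned above and applies $\mathcal{T}(M)$ to a multiset of $n$ approximants of $N$.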
Masked Autoencoders (MAE) play a pivotal role in learning potent representations, delivering outstanding results across various 3D perception tasks essential for autonomous driving. In real-world driving scenarios, it is commonplace to deploy multiple sensors for comprehensive environment perception. While integrating multi-modal features from these sensors can produce rich and powerful representations, there is a noticeable gap in MAE methods addressing this integration. This research delves into multi-modal Masked Autoencoders tailored for a unified representation space in autonomous driving, aiming to pioneer a more efficient fusion of two distinct modalities. To marry the semantics inherent in images with the geometric structure of LiDAR point clouds, we propose UniM$^2$AE, a simple yet potent multi-modal self-supervised pre-training framework consisting of two main designs. First, it projects the features from both modalities into a cohesive 3D volume space, expanded from the bird's eye view (BEV) to include the height dimension. This extension makes it possible to back-project the informative fused features into their native modalities and reconstruct the multiple masked inputs. Second, the Multi-modal 3D Interactive Module (MMIM) is employed to facilitate efficient inter-modal interaction. Extensive experiments conducted on the nuScenes dataset attest to the efficacy of UniM$^2$AE, showing improvements in 3D object detection and BEV map segmentation of 1.2\% (NDS) and 6.5\% (mIoU), respectively. Code is available at //github.com/hollow-503/UniM2AE.
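For context, the core operation of any MAE-style pre-training is random masking of input tokens; the sketch below shows the usual formulation (a per-sample random permutation, keeping the first fraction of tokens). This is generic MAE machinery for illustration, not the UniM$^2$AE code.

```python
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of tokens, as in MAE-style pre-training.
    tokens: (batch, num_tokens, dim); returns the kept tokens, a binary
    mask (1 = removed), and indices to restore the original order."""
    b, n, d = tokens.shape
    num_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n, device=tokens.device)
    ids_shuffle = noise.argsort(dim=1)          # random permutation per sample
    ids_restore = ids_shuffle.argsort(dim=1)    # inverse permutation
    ids_keep = ids_shuffle[:, :num_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n, device=tokens.device)
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)   # mask in original token order
    return kept, mask, ids_restore
```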
An independent set in a graph $G$ is a set $S$ of pairwise non-adjacent vertices in $G$. A family $\mathcal{F}$ of independent sets in $G$ is called a $k$-independence covering family if for every independent set $I$ in $G$ of size at most $k$, there exists an $S \in \mathcal{F}$ such that $I \subseteq S$. Lokshtanov et al. [ACM Transactions on Algorithms, 2018] showed that graphs of degeneracy $d$ admit $k$-independence covering families of size $\binom{k(d+1)}{k} \cdot 2^{o(kd)} \cdot \log n$, and used this result to design efficient parameterized algorithms for a number of problems, including STABLE ODD CYCLE TRANSVERSAL and STABLE MULTICUT. In light of these results, it is natural to ask whether even more general families of graphs admit $k$-independence covering families of size $f(k)n^{O(1)}$. Graphs that exclude the complete bipartite graph $K_{d+1,d+1}$, with $d+1$ vertices on each side, as a subgraph, called $K_{d+1,d+1}$-free graphs, are a frequently considered generalization of $d$-degenerate graphs. This motivates the question whether $K_{d,d}$-free graphs admit $k$-independence covering families of size $f(k,d)n^{O(1)}$. Our main result is a resounding "no" to this question: specifically, we prove that even $K_{2,2}$-free graphs (or equivalently $C_4$-free graphs) do not admit $k$-independence covering families of size $f(k)n^{\frac{k}{4}-\epsilon}$.
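To make the definition concrete, here is a small brute-force checker (illustrative only; all names are ours) that verifies whether a family of vertex sets is a $k$-independence covering family for a graph given by adjacency sets:

```python
from itertools import combinations

def is_independent(adj, s):
    """adj: dict mapping each vertex to the set of its neighbours."""
    return all(v not in adj[u] for u, v in combinations(s, 2))

def is_k_independence_covering(adj, family, k):
    """Check that every independent set I with |I| <= k is contained
    in some member of `family` (a list of vertex sets)."""
    vertices = list(adj)
    for size in range(1, k + 1):
        for cand in combinations(vertices, size):
            if is_independent(adj, cand):
                if not any(set(cand) <= s for s in family):
                    return False
    return True

# Example: the 4-cycle C4 (which is exactly K_{2,2}); its two maximal
# independent sets form a 2-independence covering family.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert is_k_independence_covering(adj, [{0, 2}, {1, 3}], 2)
```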
The spreading of prion proteins underlies brain neurodegeneration. This paper deals with the numerical modelling of the misfolding process of $\alpha$-synuclein in Parkinson's disease. We introduce and analyze a discontinuous Galerkin method for the semi-discrete approximation of the Fisher-Kolmogorov (FK) equation, which can be employed to model the process. For space discretization we employ a discontinuous Galerkin method on polygonal and polyhedral grids (PolyDG), which allows us to accurately simulate the wavefronts typically observed in prionic spreading. We prove stability and a priori error estimates for the semi-discrete formulation, and then use a Crank-Nicolson scheme to advance in time. To verify the numerical model, we first consider a manufactured solution, and then a case with wavefront propagation on two-dimensional polygonal grids. Next, we carry out a simulation of $\alpha$-synuclein spreading in a two-dimensional brain slice in the sagittal plane, using a polygonal agglomerated grid that takes full advantage of the flexibility of the PolyDG approximation. Finally, we present a simulation in a three-dimensional patient-specific brain geometry reconstructed from magnetic resonance images.
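For reference, the Fisher-Kolmogorov equation in its standard form (the abstract does not fix notation, so the coefficients below are generic) reads

\[
\frac{\partial c}{\partial t} \;=\; \nabla \cdot \left( D\, \nabla c \right) \;+\; \alpha\, c\,(1-c),
\]

where $c$ is the relative concentration of misfolded protein, $D$ the diffusion tensor and $\alpha$ the reaction coefficient; the interplay of diffusion and logistic growth is what produces the travelling wavefronts simulated above.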
$\textbf{Motivation:}$ Small $p$-values often need to be accurately estimated in large-scale genomic studies, both for adjusting multiple hypothesis tests and for ranking genomic features by their statistical significance. For complicated test statistics whose cumulative distribution functions are analytically intractable, existing methods usually do not work well for small $p$-values, due either to a lack of accuracy or to computational restrictions. We propose a general approach for accurately and efficiently estimating small $p$-values for a broad range of complicated test statistics, based on the principle of the cross-entropy method and Markov chain Monte Carlo sampling techniques. $\textbf{Results:}$ We evaluate the performance of the proposed algorithm through simulations and demonstrate its application to three real-world examples in genomic studies. The results show that our approach can accurately evaluate small to extremely small $p$-values (e.g., $10^{-6}$ to $10^{-100}$). The proposed algorithm is helpful for improving some existing test procedures and for developing new test procedures in genomic studies.
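The following is a toy instance of the cross-entropy idea behind such approaches (estimate a small tail probability by importance sampling from an adaptively fitted proposal), not the authors' algorithm; the Gaussian setting and all parameter choices are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ce_tail_probability(threshold, n=100_000, rho=0.01):
    """Estimate p = P(Z > threshold) for Z ~ N(0, 1) via the cross-entropy
    method: shift the sampling mean towards the rare event, then correct
    with importance weights."""
    mu = 0.0
    for _ in range(10):                     # CE iterations
        z = rng.normal(mu, 1.0, n)
        elite = np.sort(z)[-int(rho * n):]  # best rho-fraction of samples
        if elite.min() >= threshold:        # event no longer rare under mu
            break
        mu = elite.mean()                   # CE update of the proposal mean
    z = rng.normal(mu, 1.0, n)
    # likelihood ratio N(0,1)/N(mu,1) evaluated at the samples z
    weights = np.exp(-mu * z + 0.5 * mu**2)
    return np.mean(weights * (z > threshold))

print(ce_tail_probability(6.0))  # ~1e-9, far below naive Monte Carlo reach
```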
For a skew polynomial ring $R=A[X;\theta,\delta]$, where $A$ is a commutative Frobenius ring, $\theta$ an endomorphism of $A$ and $\delta$ a $\theta$-derivation of $A$, we consider cyclic left module codes $\mathcal{C}=Rg/Rf\subset R/Rf$ where $g$ is a left and right divisor of $f$ in $R$. In this paper we derive a parity-check matrix when $A$ is a finite commutative Frobenius ring, using only the framework of skew polynomial rings. We consider rings $A=B[a_1,\ldots,a_s]$ which are free $B$-algebras and where the restrictions of $\theta$ and $\delta$ to $B$ are polynomial maps. If a Gr\"obner basis can be computed over $B$, then we show that all Euclidean and Hermitian dual-containing codes $\mathcal{C}=Rg/Rf\subset R/Rf$ can be computed using a Gr\"obner basis. We also give an algorithm to test whether the dual code is again a cyclic left module code. We illustrate our approach for rings of order $4$ with a non-trivial endomorphism and for the Galois ring of characteristic $4$.
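Recall the defining relation of the skew polynomial ring $R=A[X;\theta,\delta]$, which determines all products of skew polynomials by iteration:

\[
X\,a \;=\; \theta(a)\,X \;+\; \delta(a) \qquad \text{for all } a \in A .
\]

It is this twisted multiplication that makes computing with the divisors of $f$ nontrivial and motivates the Gr\"obner basis computations over $B$.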
Echocardiography (echo) is an ultrasound imaging modality that is widely used for various cardiovascular diagnosis tasks. Due to inter-observer variability in echo-based diagnosis, which arises from the variability in echo image acquisition and the interpretation of echo images based on clinical experience, vision-based machine learning (ML) methods have gained popularity as secondary layers of verification. For such safety-critical applications, it is essential that any proposed ML method present a level of explainability along with good accuracy. In addition, such methods must be able to process several echo videos obtained from various heart views, and the interactions among them, to produce predictions for a variety of cardiovascular measurement or interpretation tasks. Prior work either lacks explainability or is limited in scope by focusing on a single cardiovascular task. To remedy this, we propose a General, Echo-based, Multi-Level Transformer (GEMTrans) framework that provides explainability while simultaneously enabling multi-video training, where the interplay among echo image patches in the same frame, all frames in the same video, and inter-video relationships is captured based on a downstream task. We show the flexibility of our framework by considering two critical tasks: ejection fraction (EF) estimation and aortic stenosis (AS) severity detection. Our model achieves mean absolute errors of 4.15 and 4.84 for single- and dual-video EF estimation, and an accuracy of 96.5\% for AS detection, while providing informative task-specific attention maps and prototypical explainability.
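To illustrate the multi-level idea (attention over patches within a frame, frames within a video, then across videos), here is a minimal PyTorch sketch of such a hierarchy. It is not the GEMTrans architecture; the layer counts, pooling choices and names are hypothetical.

```python
import torch
import torch.nn as nn

class MultiLevelEncoder(nn.Module):
    """Illustrative three-level transformer: patches within a frame,
    frames within a video, then videos within a study."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.patch_enc = nn.TransformerEncoder(layer(), num_layers=2)
        self.frame_enc = nn.TransformerEncoder(layer(), num_layers=2)
        self.video_enc = nn.TransformerEncoder(layer(), num_layers=2)

    def forward(self, x):
        # x: (videos, frames, patches, dim) for a single study
        v, f, p, d = x.shape
        x = self.patch_enc(x.reshape(v * f, p, d)).mean(dim=1)  # frame tokens
        x = self.frame_enc(x.reshape(v, f, d)).mean(dim=1)      # video tokens
        return self.video_enc(x.unsqueeze(0)).mean(dim=1)       # study embedding
```

The attention weights at each level are what would supply per-patch, per-frame and per-video explainability.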
Preconditioning is essential in iterative methods for solving linear systems of equations. We study a nonclassical matrix condition number, the $\omega$-condition number, in the context of optimal conditioning for low-rank updates of positive definite matrices. For a positive definite matrix, this condition measure is the ratio of the arithmetic and geometric means of the eigenvalues. In particular, we concentrate on linear systems with low-rank updates of positive definite matrices that are close to singular; such systems arise in the context of nonsmooth Newton methods using generalized Jacobians. We derive an explicit formula for the optimal $\omega$-preconditioned update in this framework. Evaluating or estimating the classical condition number $\kappa$ can be expensive. We show that the $\omega$-condition number can be evaluated exactly following a Cholesky or LU factorization, and that it estimates the actual conditioning of a linear system significantly better. Moreover, our empirical results show a significant decrease in the number of iterations required to reach a requested residual accuracy in an iterative method; these results confirm the efficacy of using the $\omega$-condition number rather than the classical condition number.
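The evaluation claim is easy to make concrete: for a symmetric positive definite $A$, the trace gives the arithmetic mean of the eigenvalues, and the diagonal of the Cholesky factor gives the determinant, hence the geometric mean, so no eigendecomposition is needed. A small NumPy sketch of this computation (ours, for illustration):

```python
import numpy as np

def omega_condition(A: np.ndarray) -> float:
    """omega(A) = (arithmetic mean of eigenvalues) / (geometric mean),
    for symmetric positive definite A, computed without eigenvalues:
    trace(A)/n is the arithmetic mean, and det(A)^(1/n), recovered from
    the Cholesky factor's diagonal, is the geometric mean."""
    n = A.shape[0]
    arith = np.trace(A) / n
    L = np.linalg.cholesky(A)
    # det(A) = prod(diag(L))^2; work in log-space for numerical stability
    log_geo = 2.0 * np.log(np.diag(L)).sum() / n
    return arith / np.exp(log_geo)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(omega_condition(A))  # matches the eigenvalue-based definition
```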