The identification of the Purkinje conduction system in the heart is a challenging task, yet essential for a correct definition of cardiac digital twins for precision cardiology. Here, we propose a probabilistic approach for identifying the Purkinje network from non-invasive clinical data such as the standard electrocardiogram (ECG). We use cardiac imaging to build an anatomically accurate model of the ventricles; we algorithmically generate a rule-based Purkinje network tailored to the anatomy; we simulate physiological electrocardiograms with a fast model; and we identify the geometrical and electrical parameters of the Purkinje-ECG model with Bayesian optimization and approximate Bayesian computation. The proposed approach is inherently probabilistic and generates a population of plausible Purkinje networks, all fitting the ECG within a given tolerance. In this way, we can estimate the uncertainty of the parameters, thus providing reliable predictions. We test our methodology in physiological and pathological scenarios, showing that we are able to accurately recover the ECG with our model. We propagate the uncertainty in the Purkinje network parameters in a simulation of conduction system pacing therapy. Our methodology is a step forward in the creation of digital twins from non-invasive data in precision medicine. An open-source implementation is available at https://github.com/fsahli/purkinje-learning
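To make the inference step concrete, below is a minimal sketch of approximate Bayesian computation by rejection against a target ECG, with a stand-in forward model (a damped sinusoid) in place of the fast Purkinje-ECG simulator; the function names, prior, discrepancy, and tolerance are illustrative assumptions, not the paper's implementation (which additionally uses Bayesian optimization).

```python
import numpy as np

def abc_rejection(simulate_ecg, ecg_target, prior_sampler, tol, n_draws=10_000):
    """Minimal ABC rejection sampler: keep parameter draws whose simulated
    ECG falls within `tol` of the target under a relative L2 discrepancy.
    `simulate_ecg` stands in for a fast Purkinje-ECG forward model."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()                       # e.g. geometry/conduction parameters
        ecg_sim = simulate_ecg(theta)
        discrepancy = np.linalg.norm(ecg_sim - ecg_target) / np.linalg.norm(ecg_target)
        if discrepancy < tol:
            accepted.append(theta)
    return np.array(accepted)                         # population of plausible parameter sets

# Toy usage with a stand-in "forward model" (a damped sinusoid parameterized by theta).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
true_theta = np.array([3.0, 8.0])
forward = lambda th: np.exp(-th[0] * t) * np.sin(2 * np.pi * th[1] * t)
target = forward(true_theta)
posterior = abc_rejection(forward, target,
                          prior_sampler=lambda: rng.uniform([1, 5], [5, 12]),
                          tol=0.2)
print(f"accepted {len(posterior)} draws",
      posterior.mean(axis=0) if len(posterior) else "none accepted")
```

The accepted draws form the population of plausible parameter sets whose spread quantifies the parameter uncertainty discussed above.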
This manuscript investigates the problem of locational complexity, a type of complexity that emanates from a company's territorial strategy. Using an entropy-based measure of supply chain structural complexity (pars-complexity), we develop a theoretical framework for analysing the effects of locational complexity on the profitability of service/manufacturing networks. The proposed model is used to shed light on why network restructuring strategies may prove ineffective at reducing complexity-related costs. Our contribution is three-fold. First, we develop a novel mathematical formulation of a facility location problem that integrates the pars-complexity measure into the decision process. Second, using this model, we propose a decomposition of the penalties imposed by locational complexity into (a) an intrinsic cost of structural complexity; and (b) an avoidable cost of ignoring such complexity in the decision process. This decomposition is a valuable tool for identifying more effective measures for tackling locational complexity; moreover, it allows us to explain the so-called addiction to growth within the locational context. Finally, we propose three alternative strategies that mimic different approaches used in practice by companies that have engaged in network restructuring processes. The impact of these approaches is evaluated through extensive numerical experiments. Our experimental results suggest that network restructuring efforts that are not accompanied by a substantial reduction of the company's target market fail to reduce complexity-related costs and therefore have a limited impact on the company's profitability.
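As an illustration of how an entropy-based complexity term can enter a location decision, the sketch below adds a Shannon-entropy penalty on facility flows to a toy facility-location objective; the pars-complexity measure and the paper's actual formulation are not reproduced here, and all names, costs, and weights are assumptions.

```python
import numpy as np

def structural_entropy(flows):
    """Shannon entropy (in bits) of the normalized flow shares -- an
    illustrative stand-in for an entropy-based structural-complexity measure."""
    shares = np.asarray(flows, dtype=float)
    shares = shares[shares > 0] / shares.sum()
    return float(-(shares * np.log2(shares)).sum())

def total_cost(assignment_flows, transport_cost, fixed_cost_per_facility, complexity_weight):
    """Toy facility-location objective: transport + fixed costs plus a penalty
    proportional to the entropy of the flows handled by the open facilities."""
    flows_per_facility = assignment_flows.sum(axis=0)        # rows: customers, cols: facilities
    open_facilities = flows_per_facility > 0
    cost = (assignment_flows * transport_cost).sum()
    cost += fixed_cost_per_facility * open_facilities.sum()
    cost += complexity_weight * structural_entropy(flows_per_facility[open_facilities])
    return cost

# Two candidate network configurations for three customers and three sites.
transport = np.array([[1.0, 2.0, 3.0],
                      [2.0, 1.0, 2.0],
                      [3.0, 2.0, 1.0]])
concentrated = np.array([[10, 0, 0], [10, 0, 0], [10, 0, 0]])   # one facility serves everyone
dispersed    = np.array([[10, 0, 0], [0, 10, 0], [0, 0, 10]])   # each customer served locally
for name, plan in [("concentrated", concentrated), ("dispersed", dispersed)]:
    print(name, total_cost(plan, transport, fixed_cost_per_facility=5.0, complexity_weight=4.0))
```

Varying `complexity_weight` shows how the trade-off between transport efficiency and structural complexity can flip the preferred configuration, which is the kind of effect the decomposition above is meant to isolate.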
A rectangulation is a decomposition of a rectangle into finitely many rectangles. Via natural equivalence relations, rectangulations can be seen as combinatorial objects with a rich structure, with links to lattice congruences, flip graphs, polytopes, lattice paths, Hopf algebras, etc. In this paper, we first revisit the structure of the respective equivalence classes: weak rectangulations, which preserve rectangle-segment adjacencies, and strong rectangulations, which preserve rectangle-rectangle adjacencies. We thoroughly investigate posets defined by adjacency in rectangulations of both kinds, and we unify and simplify known bijections between rectangulations and permutation classes. This yields a uniform treatment of the mappings between permutations and rectangulations that consolidates results from earlier contributions and emphasizes the parallels and differences between the weak and the strong cases. Then, we consider the special case of guillotine rectangulations and prove that they can be characterized - under all known mappings between permutations and rectangulations - by the avoidance of two mesh patterns that correspond to "windmills" in rectangulations. This yields new permutation classes in bijection with weak guillotine rectangulations, and the first known permutation class in bijection with strong guillotine rectangulations. Finally, we address enumerative questions and prove asymptotic bounds for several families of strong rectangulations.
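For readers who want to experiment with the guillotine property directly, the sketch below tests it by recursively searching for a full horizontal or vertical cut in a rectangulation given as a list of axis-aligned rectangles; this naive geometric check is for illustration only and is not the mesh-pattern characterization developed in the paper.

```python
def is_guillotine(rects):
    """Check whether a rectangulation (a list of rectangles (x0, y0, x1, y1)
    tiling their bounding box) is guillotine, i.e. obtainable by recursively
    cutting all the way across with horizontal or vertical cuts."""
    if len(rects) <= 1:
        return True
    xs = {r[0] for r in rects} | {r[2] for r in rects}
    ys = {r[1] for r in rects} | {r[3] for r in rects}
    x_min, x_max, y_min, y_max = min(xs), max(xs), min(ys), max(ys)
    # Try every candidate vertical cut that no rectangle straddles, then recurse.
    for c in xs:
        if x_min < c < x_max and all(r[2] <= c or r[0] >= c for r in rects):
            return (is_guillotine([r for r in rects if r[2] <= c]) and
                    is_guillotine([r for r in rects if r[0] >= c]))
    # Same for horizontal cuts.
    for c in ys:
        if y_min < c < y_max and all(r[3] <= c or r[1] >= c for r in rects):
            return (is_guillotine([r for r in rects if r[3] <= c]) and
                    is_guillotine([r for r in rects if r[1] >= c]))
    return False

# The classic 5-rectangle "windmill" is the smallest non-guillotine rectangulation.
windmill = [(0, 0, 2, 1), (2, 0, 3, 2), (1, 2, 3, 3), (0, 1, 1, 3), (1, 1, 2, 2)]
print(is_guillotine(windmill))                                     # False
print(is_guillotine([(0, 0, 1, 3), (1, 0, 3, 1), (1, 1, 3, 3)]))   # True
```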
Large Language Models (LLMs) and, more specifically, Generative Pre-Trained Transformers (GPTs) can help stakeholders in climate action explore digital knowledge bases and extract and utilize climate action knowledge in a sustainable manner. However, LLMs are "probabilistic models of knowledge bases" that excel at generating convincing texts but cannot be entirely relied upon due to the probabilistic nature of the information they produce. This brief report illustrates the problem space with examples of LLM responses to questions relevant to climate action.
Neural operators (NOs) are discretization-invariant deep learning methods with functional output that can approximate any continuous operator. NOs have demonstrated superior performance in solving partial differential equations (PDEs) compared with other deep learning methods. However, the spatial domain of the input function must be identical to that of the output, which limits their applicability. For instance, the widely used Fourier neural operator (FNO) fails to approximate the operator that maps a boundary condition to the PDE solution. To address this issue, we propose a novel framework called the resolution-invariant deep operator (RDO), which decouples the spatial domains of the input and output. RDO is motivated by the deep operator network (DeepONet) but, unlike DeepONet, does not require retraining the network when the input/output is changed. RDO takes functional input, and its output is also functional, so it retains the resolution-invariance property of NOs. It can also solve PDEs on complex geometries where NOs fail. Various numerical experiments demonstrate the advantage of our method over DeepONet and FNO.
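To fix ideas, here is a minimal branch/trunk operator network in the spirit of DeepONet, whose construction motivates RDO: the output function can be queried at any resolution without retraining. This sketch is purely illustrative and is not the RDO architecture; the layer sizes and sensor count are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyDeepONet(nn.Module):
    """Minimal branch/trunk operator network: the branch encodes the input
    function sampled at m fixed sensors, the trunk encodes query coordinates,
    and the prediction is their inner product (the DeepONet construction that
    motivates RDO; this is not the RDO architecture itself)."""
    def __init__(self, m_sensors=64, width=64, p=32):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, width), nn.Tanh(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, p))

    def forward(self, u_sensors, y_query):
        # u_sensors: (batch, m_sensors), y_query: (n_query, 1)
        b = self.branch(u_sensors)            # (batch, p)
        t = self.trunk(y_query)               # (n_query, p)
        return b @ t.T                        # (batch, n_query) predicted output function

# The output function can be queried at any resolution without retraining:
model = TinyDeepONet()
u = torch.randn(8, 64)                        # 8 input functions sampled at 64 sensors
coarse = model(u, torch.linspace(0, 1, 50).unsqueeze(-1))
fine = model(u, torch.linspace(0, 1, 400).unsqueeze(-1))
print(coarse.shape, fine.shape)               # torch.Size([8, 50]) torch.Size([8, 400])
```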
This work is motivated by a longitudinal data set of HIV CD4+ T cell counts from Livingstone district, Zambia. The corresponding histogram plots indicate a lack of symmetry in the marginal distributions, and the pairwise scatter plots show non-elliptical dependence patterns. The standard linear mixed model for longitudinal data fails to capture these features, so it seems appropriate to consider a more general framework for modeling such data. In this article, we consider generalized linear mixed models (GLMMs) for the marginals (e.g., a Gamma mixed model), while the temporal dependence of the repeated measurements is modeled by copulas corresponding to skew-elliptical distributions (such as the skew-normal or skew-t). Our proposed class of copula-based mixed models simultaneously accounts for asymmetry, between-subject variability, and non-standard temporal dependence, and can hence be viewed as an extension of the standard linear mixed model based on multivariate normality. We estimate the model parameters using the inference functions for margins (IFM) method and describe how to obtain standard errors of the parameter estimates. We investigate the finite-sample performance of our procedure through extensive simulation studies involving skewed and symmetric marginal distributions and several choices of copula. Finally, we apply our models to the HIV data set and report the findings.
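A minimal sketch of the two-stage IFM idea is given below, with Gamma marginals and a Gaussian copula standing in for the skew-elliptical copulas of the paper; the simulated data, occasion-wise fitting, and clipping constant are illustrative assumptions rather than the authors' procedure.

```python
import numpy as np
from scipy import stats

def ifm_gamma_gaussian_copula(Y):
    """Two-stage IFM sketch: (1) fit a Gamma marginal to each measurement
    occasion, (2) map observations to normal scores via the fitted CDFs and
    estimate the copula correlation.  A Gaussian copula is used here as a
    simple stand-in for the skew-normal/skew-t copulas of the paper."""
    n, T = Y.shape
    marginals = []
    U = np.empty_like(Y, dtype=float)
    for t in range(T):                                    # stage 1: marginal fits
        a, loc, scale = stats.gamma.fit(Y[:, t], floc=0)  # location fixed at 0
        marginals.append((a, scale))
        U[:, t] = stats.gamma.cdf(Y[:, t], a, loc=0, scale=scale)
    Z = stats.norm.ppf(np.clip(U, 1e-6, 1 - 1e-6))        # stage 2: normal scores
    R = np.corrcoef(Z, rowvar=False)                      # copula correlation matrix
    return marginals, R

# Toy longitudinal data: n subjects, T visits, exchangeable dependence, Gamma margins.
rng = np.random.default_rng(1)
n, T, rho = 300, 4, 0.6
Sigma = rho * np.ones((T, T)) + (1 - rho) * np.eye(T)
Z = rng.multivariate_normal(np.zeros(T), Sigma, size=n)
Y = stats.gamma.ppf(stats.norm.cdf(Z), a=2.0, scale=50.0)
marginals, R = ifm_gamma_gaussian_copula(Y)
print(marginals[0], np.round(R, 2))
```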
We propose a new numerical domain decomposition method for solving elliptic equations on compact Riemannian manifolds. One advantage of this method is its ability to bypass the need for global triangulations or grids on the manifolds. Additionally, it features a highly parallel iterative scheme. To verify its efficacy, we conduct numerical experiments on some $4$-dimensional manifolds without and with boundary.
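As a toy illustration of an overlapping domain-decomposition iteration (here, classical alternating Schwarz for a 1D Poisson problem on an interval), consider the sketch below; it does not reflect the manifold setting or the triangulation-free construction of the paper, and its additive (simultaneous-solve) variant is the one that parallelizes.

```python
import numpy as np

def poisson_solve(f, a, b, ua, ub, h):
    """Solve -u'' = f on [a, b] with Dirichlet data ua, ub by finite differences."""
    x = np.arange(a, b + h / 2, h)
    m = len(x) - 2
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f(x[1:-1]).copy()
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    u = np.empty_like(x)
    u[0], u[-1], u[1:-1] = ua, ub, np.linalg.solve(A, rhs)
    return x, u

# Alternating Schwarz on [0, 1] with overlapping subdomains [0, 0.6] and [0.4, 1].
f = lambda x: np.pi**2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
h, g_left, g_right = 0.01, 0.0, 0.0               # interface guesses at x = 0.6 and x = 0.4
for _ in range(20):
    x1, u1 = poisson_solve(f, 0.0, 0.6, 0.0, g_left, h)    # subdomain 1
    g_right = np.interp(0.4, x1, u1)                        # pass trace at x = 0.4
    x2, u2 = poisson_solve(f, 0.4, 1.0, g_right, 0.0, h)    # subdomain 2
    g_left = np.interp(0.6, x2, u2)                         # pass trace at x = 0.6
print("error on subdomain 1:", np.abs(u1 - np.sin(np.pi * x1)).max())
```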
A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a universal model of quantum computation, Bell sampling, that can be used for both of these tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and at the same time constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols, a test for the depth of the circuit and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
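The sketch below computes, by dense linear algebra, the exact outcome distribution of a transversal Bell-basis measurement on two copies of a small state; it is meant only to make the measurement itself concrete and says nothing about the classical-intractability or shadow-extraction results, and the state preparation and outcome labelling are arbitrary choices.

```python
import numpy as np

BELL = np.array([[1, 0, 0, 1],     # <Phi+|
                 [0, 1, 1, 0],     # <Psi+|
                 [1, 0, 0, -1],    # <Phi-|
                 [0, 1, -1, 0]],   # <Psi-|
                dtype=complex) / np.sqrt(2)

def bell_sample_distribution(psi):
    """Exact outcome probabilities of a transversal Bell measurement on two
    copies of the n-qubit state `psi` (dense simulation, small n only)."""
    n = int(np.log2(psi.size))
    amp = np.kron(psi, psi).reshape((2,) * (2 * n))   # axes 0..n-1: copy 1, n..2n-1: copy 2
    for i in range(n):
        amp = np.moveaxis(amp, (i, n + i), (0, 1))    # bring the i-th qubit pair to the front
        shape = amp.shape
        amp = (BELL @ amp.reshape(4, -1)).reshape(shape)
        amp = np.moveaxis(amp, (0, 1), (i, n + i))
    return np.abs(amp.reshape(-1)) ** 2               # one probability per 2n-bit outcome

# Two copies of the 2-qubit state (|00> + |11>)/sqrt(2).
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
p = bell_sample_distribution(psi)
print(np.round(p, 3), p.sum())                        # probabilities sum to 1
```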
With the adoption of machine learning into routine clinical practice comes the need for Explainable AI methods tailored to medical applications. Shapley values have sparked wide interest for locally explaining models. Here, we demonstrate that their interpretation depends strongly on both the summary statistic and its estimator, which in turn define what we identify as an 'anchor point'. We show that the convention of using a mean anchor point may generate misleading interpretations for survival analysis, and we introduce median-SHAP, a method for explaining black-box models that predict individual survival times.
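The anchor-point idea can be illustrated with exact interventional Shapley values for a tiny model, where the value of a coalition is either the mean or the median of the model output over background samples; this is an illustrative sketch of swapping the summary statistic, not the authors' median-SHAP estimator, and the toy model and data are assumptions.

```python
import numpy as np
from itertools import combinations
from math import comb

def shapley_values(model, x, background, summary=np.mean):
    """Exact Shapley values for one instance `x`: the value of a coalition S
    is summary(model(x_S, X_notS)) over the background rows, so swapping
    `summary` between np.mean and np.median changes the 'anchor point'."""
    d = x.size
    phi = np.zeros(d)

    def value(S):
        X = background.copy()
        if S:
            X[:, list(S)] = x[list(S)]
        return summary(model(X))

    for j in range(d):
        others = [k for k in range(d) if k != j]
        for size in range(d):
            for S in combinations(others, size):
                w = 1.0 / (d * comb(d - 1, size))      # |S|!(d-|S|-1)!/d!
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi

# Toy skewed "survival time" model with a heavy-tailed background.
rng = np.random.default_rng(0)
background = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 3))
model = lambda X: 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2]
x = np.array([1.0, 2.0, 0.5])
print("mean-anchored:  ", np.round(shapley_values(model, x, background, np.mean), 3))
print("median-anchored:", np.round(shapley_values(model, x, background, np.median), 3))
```

With a skewed background, the two anchor points differ noticeably, which is exactly where the choice of summary statistic changes the interpretation of the attributions.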
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that XAI must be validated empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency-map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
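A minimal rendering of the generalization step is sketched below: similarity between the AI's saliency map and the explainee's own imagined saliency map decays exponentially with distance, following Shepard's law. The L1 distance, the scale parameter, the agreement prior, and the function names are assumptions for illustration only and not the fitted model from the study.

```python
import numpy as np

def shepard_similarity(saliency_ai, saliency_own, scale=1.0):
    """Shepard's universal law of generalization: similarity decays
    exponentially with distance in a psychological similarity space.  Here the
    'space' is a flattened saliency map and the distance is L1 -- both are
    illustrative assumptions."""
    d = np.abs(saliency_ai - saliency_own).sum()
    return np.exp(-d / scale)

def predicted_agreement(saliency_ai, saliency_own, prior_agreement=0.9, scale=1.0):
    """Toy prediction of whether the explainee thinks the AI decided as they
    would: the prior expectation of agreement is discounted by how dissimilar
    the AI's explanation is from the explainee's own imagined explanation."""
    return prior_agreement * shepard_similarity(saliency_ai, saliency_own, scale)

own = np.array([0.0, 0.1, 0.8, 0.1])     # where the participant would look
close = np.array([0.0, 0.2, 0.7, 0.1])   # AI saliency map similar to their own
far = np.array([0.6, 0.3, 0.0, 0.1])     # AI saliency map unlike their own
print(predicted_agreement(close, own), predicted_agreement(far, own))
```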
Deep learning is usually described as an experiment-driven field under continual criticism for lacking theoretical foundations. This problem has been partially addressed by a large body of literature that has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their associated dynamical systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of these dynamical systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasingly intensive concerns regarding ethics and security and their relationship with generalizability.